Trying Out SnapMirror Cascades on Amazon FSx for NetApp ONTAP

For migrations and secondary backups
2023.12.27

I'd rather not migrate with SnapMirror straight off the production ONTAP

Hello, this is のんピ (@non____97).

Have you ever thought, when migrating from an on-premises ONTAP to Amazon FSx for NetApp ONTAP (FSxN), that you would rather not touch the ONTAP serving the production workload? I have.

Making configuration changes to the production workload could add load and impact the business, which is something we want to avoid.

For most critical workloads, Snapshot copies are presumably already being transferred to another ONTAP system with SnapMirror/SnapVault.

In that case, you would want to transfer Snapshot copies onward from that destination ONTAP to FSxN with another SnapMirror. Is that actually possible?

Yes, it is. You can do this with a SnapMirror cascade configuration.

[Figure: SnapMirror cascade]

Excerpt: How to configure SnapMirror cascade relationship - NetApp Knowledge Base

Cascading SnapMirror lets you transfer Snapshot copies to additional storage without degrading the performance of the source volume.

I gave it a try.

Summary first

  • snapmirror protect cannot be used with a cascaded SnapMirror
  • A cascaded SnapMirror still preserves deduplication and aggregate-layer data reduction savings
  • To carry the deduplication savings produced by running Storage Efficiency on the second volume through to the third and later volumes, two SnapMirror transfers are needed: source volume -> second volume -> third volume
  • To carry aggregate-layer savings such as Inactive data compression performed on the second volume through to the third and later volumes, a single SnapMirror transfer from the second volume -> third volume is enough
    • However, if the source volume's Tiering Policy is All, the aggregate-layer savings cannot be preserved
  • If the intermediate volume of a cascaded SnapMirror becomes unavailable, you can synchronize directly between the source volume and the final destination volume (see the sketch after this list)
    • Delete the SnapMirror relationship between the intermediate volume and the final destination volume, create a SnapMirror relationship between the source volume and the final destination volume, and resynchronize with snapmirror resync
    • When resynchronizing, data retained after the common Snapshot copy is deleted
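
As a sketch of that last recovery flow, using the volume names from the verification below (svm:vol1 -> svm2:vol1_dst -> svm3:vol1_dst_dst) and assuming svm and svm3 are already peered, the commands would look roughly like this:

::*> snapmirror delete -destination-path svm3:vol1_dst_dst
::*> snapmirror create -source-path svm:vol1 -destination-path svm3:vol1_dst_dst -policy MirrorAllSnapshots
::*> snapmirror resync -destination-path svm3:vol1_dst_dst

snapmirror resync depends on a Snapshot copy that both volumes share; anything written to the destination after that common Snapshot copy is discarded. A snapmirror release on the old source side may also be needed to clean up the stale relationship metadata.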

Test environment

The test environment is set up as follows.

[Figure: test environment diagram]

Everything here happens within a single FSxN file system, but it works just as well across separate FSxN file systems.

In that case, cluster peering and SVM peering between the source cluster and the final destination cluster are not required. You only need cluster peering and SVM peering between the clusters that SnapMirror transfers between directly.
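
For reference, a rough sketch of that peering when the cascade spans two FSxN file systems; the cluster names and the angle-bracket values are placeholders, not taken from this environment. On the intermediate cluster, generate a passphrase and point at the source cluster's intercluster LIFs:

FsxId-intermediate::> cluster peer create -address-family ipv4 -peer-addrs <source intercluster LIF IPs> -generate-passphrase

On the source cluster, complete the peer with the generated passphrase:

FsxId-source::> cluster peer create -address-family ipv4 -peer-addrs <intermediate intercluster LIF IPs>

Then peer the SVMs across the clusters:

FsxId-source::> vserver peer create -vserver svm -peer-vserver svm2 -peer-cluster <intermediate cluster name> -applications snapmirror
FsxId-intermediate::> vserver peer accept -vserver svm2 -peer-vserver svm

The same pair of steps would be repeated between the intermediate and tertiary clusters, but never between the source and tertiary clusters.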

In SnapMirror, you can replicate data from a SnapMirror destination to another system. A system that is the destination of one SnapMirror relationship can therefore be used as the source of another SnapMirror relationship. This is useful when distributing data from one site to multiple sites, and is called cascading. In a cascade topology, you must create intercluster networks between the primary and secondary clusters and between the secondary and tertiary clusters; no intercluster network is required between the primary and tertiary clusters. The figure below shows an example of a cascade configuration consisting of two hops.

[Figure 10: SnapMirror cascade]

SnapMirror Configuration and Best Practices Guide for ONTAP 9 - p.26, Cascade relationships

I prepared the following three SVMs:

  • svm
  • svm2
  • svm3
::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
svm         data    default    running    running     svm_root   aggr1
svm2        data    default    running    running     svm2_root  aggr1
svm3        data    default    running    running     svm3_root  aggr1
3 entries were displayed.

The volumes are as follows.

::> volume show
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm       svm_root     aggr1        online     RW          1GB    972.3MB    0%
svm       vol1         aggr1        online     RW         16GB    15.20GB    0%
svm2      svm2_root    aggr1        online     RW          1GB    972.5MB    0%
svm3      svm3_root    aggr1        online     RW          1GB    972.5MB    0%
4 entries were displayed.

vol1, where the test files will be written, has Tiering Policy None and Storage Efficiency disabled. This is to keep deduplication and compression from running while the files are being created.
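
For reference, a sketch of how a volume like this could be provisioned; these are not the exact commands used for this environment, and the size and junction path are assumptions:

::> volume create -vserver svm -volume vol1 -aggregate aggr1 -size 16GB -state online -junction-path /vol1 -tiering-policy none
::> volume efficiency off -vserver svm -volume vol1

The volume efficiency off is explicit because FSxN may enable Storage Efficiency on new volumes by default; the volume show and volume efficiency show outputs below confirm the resulting state.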

::> volume show -volume vol1 -fields tiering-policy
vserver volume tiering-policy
------- ------ --------------
svm     vol1   none

::> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Disabled  Idle        Idle for 00:02:38  auto

::> set diag

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state    policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ -------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Disabled auto   false       false              efficient               false         false           true                              false                           false

Trying it out

Creating the test files

Create the files used for testing.

I will create the following three files:

  1. A 1 GiB text file filled with the character 1
  2. A 1 GiB binary file of random blocks
  3. A copy of file 2

The first file checks whether deduplication and compression take effect; the second and third check whether deduplication takes effect.

The log from creating them is as follows.

$ sudo mount -t nfs svm-0058ae83d258ab2e3.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1 /mnt/fsxn/vol1

$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-0058ae83d258ab2e3.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1 nfs4   16G  320K   16G   1% /mnt/fsxn/vol1

$ head -c 1G /dev/zero | tr \\0 1 | sudo tee /mnt/fsxn/vol1/1_padding_file > /dev/null

$ ls -lh /mnt/fsxn/vol1/
total 1.1G
-rw-r--r--. 1 root root 1.0G Dec 22 01:46 1_padding_file

$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/urandom_block_file bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.25093 s, 172 MB/s

$ sudo cp /mnt/fsxn/vol1/urandom_block_file /mnt/fsxn/vol1/urandom_block_file_copy

$ ls -lh /mnt/fsxn/vol1/
total 3.1G
-rw-r--r--. 1 root root 1.0G Dec 22 01:46 1_padding_file
-rw-r--r--. 1 root root 1.0G Dec 22 01:47 urandom_block_file
-rw-r--r--. 1 root root 1.0G Dec 22 01:47 urandom_block_file_copy

$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-0058ae83d258ab2e3.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1 nfs4   16G  3.1G   13G  20% /mnt/fsxn/vol1

Since Storage Efficiency is disabled, no deduplication takes place and 3.1 GB is consumed.

After creating the files, check the volume and aggregate information from the ONTAP CLI.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 12.19GB   16GB            15.20GB 3.01GB 19%          0B                 0%                         0B                  3.01GB       20%                  -                 3.01GB              0B                                  0%

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            3.01GB       0%
             Footprint in Performance Tier             3.02GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        92.66MB       0%
      Delayed Frees                                    4.99MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  3.11GB       0%

      Effective Total Footprint                        3.11GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 3.01GB
                               Total Physical Used: 2.13GB
                    Total Storage Efficiency Ratio: 1.42:1
Total Data Reduction Logical Used Without Snapshots: 3.01GB
Total Data Reduction Physical Used Without Snapshots: 2.13GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.42:1
Total Data Reduction Logical Used without snapshots and flexclones: 3.01GB
Total Data Reduction Physical Used without snapshots and flexclones: 2.13GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.42:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 3.02GB
Total Physical Used in FabricPool Performance Tier: 2.14GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.41:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.01GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 2.13GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.41:1
                Logical Space Used for All Volumes: 3.01GB
               Physical Space Used for All Volumes: 3.01GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 3.12GB
              Physical Space Used by the Aggregate: 2.13GB
           Space Saved by Aggregate Data Reduction: 1017MB
                 Aggregate Data Reduction SE Ratio: 1.47:1
              Logical Size Used by Snapshot Copies: 1.66MB
             Physical Size Used by Snapshot Copies: 708KB
              Snapshot Volume Data Reduction Ratio: 2.40:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.40:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 1

No deduplication appears to have taken place. However, Space Saved by Aggregate Data Reduction: 1017MB suggests that some kind of data reduction is happening at the aggregate layer. Since Storage Efficiency is disabled, compression and compaction should not be in effect either, so what could this be?

Enabling Storage Efficiency on vol1

With the test files in place, enable Storage Efficiency on vol1.

::*> volume efficiency on -vserver svm -volume vol1
Efficiency for volume "vol1" of Vserver "svm" is enabled.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       false              efficient               false         false           true                              false                           false

Enabling Inactive data compression on vol1

Enable Inactive data compression on vol1.

Enabling Inactive data compression requires using-auto-adaptive-compression to be true, so set compression to true beforehand.

::*> volume efficiency inactive-data-compression show
Vserver    Volume Is-Enabled Scan Mode Progress Status  Compression-Algorithm
---------- ------ ---------- --------- -------- ------  ---------------------
svm        vol1   false      -         IDLE     SUCCESS
                                                        lzopro

::*> volume efficiency modify -vserver svm -volume vol1 -compression true

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false

::*> volume efficiency inactive-data-compression show
Vserver    Volume Is-Enabled Scan Mode Progress Status  Compression-Algorithm
---------- ------ ---------- --------- -------- ------  ---------------------
svm        vol1   false      -         IDLE     SUCCESS
                                                        lzopro

::*> volume efficiency inactive-data-compression modify -vserver svm -volume vol1 -is-enabled true

::*> volume efficiency inactive-data-compression show
Vserver    Volume Is-Enabled Scan Mode Progress Status  Compression-Algorithm
---------- ------ ---------- --------- -------- ------  ---------------------
svm        vol1   true       -         IDLE     SUCCESS
                                                        lzopro

Creating the SVM peer relationships

Set up SVM peering.

The following two SVM peer relationships are created:

  1. svm - svm2
  2. svm2 - svm3
::*> vserver peer create -vserver svm -peer-vserver svm2 -applications snapmirror

Info: 'vserver peer create' command is successful.


::*> vserver peer create -vserver svm2 -peer-vserver svm3 -applications snapmirror

Info: 'vserver peer create' command is successful.


::*> vserver peer show
            Peer        Peer                           Peering        Remote
Vserver     Vserver     State        Peer Cluster      Applications   Vserver
----------- ----------- ------------ ----------------- -------------- ---------
svm         svm2        peered       FsxId0ab6f9b00824a187c
                                                       snapmirror     svm2
svm2        svm         peered       FsxId0ab6f9b00824a187c
                                                       snapmirror     svm
svm2        svm3        peered       FsxId0ab6f9b00824a187c
                                                       snapmirror     svm3
svm3        svm2        peered       FsxId0ab6f9b00824a187c
                                                       snapmirror     svm2
4 entries were displayed.

SnapMirror initialize between the svm and svm2 volumes

Initialize the SnapMirror relationship between the svm and svm2 volumes.

::*> snapmirror protect -path-list svm:vol1 -destination-vserver svm2 -policy MirrorAllSnapshots -auto-initialize true -support-tiering true -tiering-policy none
[Job 57] Job is queued: snapmirror protect for list of source endpoints beginning with "svm:vol1".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Uninitialized
                                      Transferring   1.74GB    true    12/22 01:56:30

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Uninitialized
                                      Finalizing     2.06GB    true    12/22 01:56:45

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -

::*> snapmirror show -instance

                                  Source Path: svm:vol1
                               Source Cluster: -
                               Source Vserver: svm
                                Source Volume: vol1
                             Destination Path: svm2:vol1_dst
                          Destination Cluster: -
                          Destination Vserver: svm2
                           Destination Volume: vol1_dst
                            Relationship Type: XDP
                      Relationship Group Type: none
                             Managing Vserver: svm2
                          SnapMirror Schedule: -
                       SnapMirror Policy Type: async-mirror
                            SnapMirror Policy: MirrorAllSnapshots
                                  Tries Limit: -
                            Throttle (KB/sec): unlimited
              Consistency Group Item Mappings: -
           Current Transfer Throttle (KB/sec): -
                                 Mirror State: Snapmirrored
                          Relationship Status: Idle
                      File Restore File Count: -
                       File Restore File List: -
                            Transfer Snapshot: -
                            Snapshot Progress: -
                               Total Progress: -
                    Network Compression Ratio: -
                          Snapshot Checkpoint: -
                              Newest Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                    Newest Snapshot Timestamp: 12/22 01:56:24
                            Exported Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                  Exported Snapshot Timestamp: 12/22 01:56:24
                                      Healthy: true
                              Relationship ID: 4f726f26-a06d-11ee-981e-bdd56ead09c8
                          Source Vserver UUID: 04ee1778-a058-11ee-981e-bdd56ead09c8
                     Destination Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8
                         Current Operation ID: -
                                Transfer Type: -
                               Transfer Error: -
                           Last Transfer Type: update
                          Last Transfer Error: -
                    Last Transfer Error Codes: -
                           Last Transfer Size: 0B
      Last Transfer Network Compression Ratio: 1:1
                       Last Transfer Duration: 0:0:0
                           Last Transfer From: svm:vol1
                  Last Transfer End Timestamp: 12/22 01:56:59
                             Unhealthy Reason: -
                        Progress Last Updated: -
                      Relationship Capability: 8.2 and above
                                     Lag Time: 0:0:57
                    Current Transfer Priority: -
                             SMTape Operation: -
                 Destination Volume Node Name: FsxId0ab6f9b00824a187c-01
                 Identity Preserve Vserver DR: -
                 Number of Successful Updates: 1
                     Number of Failed Updates: 0
                 Number of Successful Resyncs: 0
                     Number of Failed Resyncs: 0
                  Number of Successful Breaks: 0
                      Number of Failed Breaks: 0
                         Total Transfer Bytes: 2229164624
               Total Transfer Time in Seconds: 36
                Source Volume MSIDs Preserved: -
                                       OpMask: ffffffffffffffff
                       Is Auto Expand Enabled: -
          Percent Complete for Current Status: -

The initialize has completed.

Check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Disabled
                       -      false       true               efficient               false         true            true                              true                            false
2 entries were displayed.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 12.19GB   16GB            15.20GB 3.01GB 19%          0B                 0%                         0B                  3.01GB       20%                  -                 3.01GB              0B                                  0%
svm2    vol1_dst
               3.82GB
                    615.1MB   3.82GB          3.63GB  3.03GB 83%          0B                 0%                         3GB                 3.03GB       83%                  -                 3.03GB              0B                                  0%
2 entries were displayed.

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            3.03GB       0%
             Footprint in Performance Tier             3.06GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        30.29MB       0%
      Delayed Frees                                   30.31MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  3.09GB       0%

      Effective Total Footprint                        3.09GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 12.08GB
                               Total Physical Used: 4.18GB
                    Total Storage Efficiency Ratio: 2.89:1
Total Data Reduction Logical Used Without Snapshots: 6.04GB
Total Data Reduction Physical Used Without Snapshots: 4.18GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.44:1
Total Data Reduction Logical Used without snapshots and flexclones: 6.04GB
Total Data Reduction Physical Used without snapshots and flexclones: 4.18GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.44:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 12.08GB
Total Physical Used in FabricPool Performance Tier: 4.20GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.88:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 6.04GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.19GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.44:1
                Logical Space Used for All Volumes: 6.04GB
               Physical Space Used for All Volumes: 6.04GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 6.17GB
              Physical Space Used by the Aggregate: 4.18GB
           Space Saved by Aggregate Data Reduction: 1.99GB
                 Aggregate Data Reduction SE Ratio: 1.47:1
              Logical Size Used by Snapshot Copies: 6.04GB
             Physical Size Used by Snapshot Copies: 1.01MB
              Snapshot Volume Data Reduction Ratio: 6138.38:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 6138.38:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 1

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                           152KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                           148KB     0%    0%
2 entries were displayed.

A volume named vol1_dst has been created on svm2. The source volume vol1 has Storage Efficiency enabled, but on this volume it is disabled.

Also, Space Saved by Aggregate Data Reduction is now 1.99GB, an increase of a little under 1 GB. This shows that SnapMirror preserves the data reduction savings at the aggregate layer.

Enabling Storage Efficiency on vol1_dst

Enable Storage Efficiency on vol1_dst, the volume on svm2.

::*> volume efficiency on -vserver svm2 -volume vol1_dst
Efficiency for volume "vol1_dst" of Vserver "svm2" is enabled.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
2 entries were displayed.

Running Inactive data compression on vol1_dst

Run Inactive data compression on vol1_dst, the volume on svm2.

I ran compression before deduplication to make the compression effect easier to see. If compression ran after deduplication, the text file filled with 1s would already have most of its data eliminated by deduplication, which would make it hard to tell whether compression had any effect.

::*> volume efficiency inactive-data-compression start -vserver svm2 -volume vol1_dst -inactive-days 0
Inactive data compression scan started on volume "vol1_dst" in Vserver "svm2"

::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance

                                                                Volume: vol1_dst
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 526232
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 160
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 22
             Time since Last Inactive Data Compression Scan ended(sec): 21
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 21
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 66%

Number of Compression Done Blocks: 160 shows that very little was compressed.

The point to note is Incompressible Data Percentage: 66%, meaning 66% of the data could not be compressed. Of the 3 GiB written, 2 GiB consists of the binary files of random blocks, and as I have confirmed in a separate article, binary files of random blocks generated from /dev/urandom barely compress under Inactive data compression.

From this, I suspect that the text file filled with 1s had already been compressed and was therefore not compressed again.
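
As a quick local illustration of how differently these two data patterns compress (a generic gzip check, not part of the ONTAP run):

$ head -c 1M /dev/urandom | gzip -c | wc -c    # slightly larger than the 1 MiB input: random data does not compress
$ head -c 1M /dev/zero | tr '\0' 1 | gzip -c | wc -c    # around 1 KB: a run of identical characters compresses almost entirely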

Check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
2 entries were displayed.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 12.19GB   16GB            15.20GB 3.01GB 19%          0B                 0%                         0B                  3.01GB       20%                  -                 3.01GB              0B                                  0%
svm2    vol1_dst
               3.82GB
                    615.0MB   3.82GB          3.63GB  3.03GB 83%          0B                 0%                         3GB                 3.03GB       83%                  -                 3.03GB              0B                                  0%
2 entries were displayed.

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            3.03GB       0%
             Footprint in Performance Tier             3.06GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        30.29MB       0%
      Delayed Frees                                   31.34MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  3.09GB       0%

      Footprint Data Reduction                         1.01GB       0%
           Auto Adaptive Compression                   1.01GB       0%
      Effective Total Footprint                        2.08GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 12.06GB
                               Total Physical Used: 4.25GB
                    Total Storage Efficiency Ratio: 2.84:1
Total Data Reduction Logical Used Without Snapshots: 6.02GB
Total Data Reduction Physical Used Without Snapshots: 4.25GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.42:1
Total Data Reduction Logical Used without snapshots and flexclones: 6.02GB
Total Data Reduction Physical Used without snapshots and flexclones: 4.25GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.42:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 12.09GB
Total Physical Used in FabricPool Performance Tier: 4.29GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.82:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 6.04GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.29GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.41:1
                Logical Space Used for All Volumes: 6.02GB
               Physical Space Used for All Volumes: 6.02GB
               Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
                Volume Deduplication Savings ratio: 1.00:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.00:1
               Logical Space Used by the Aggregate: 6.23GB
              Physical Space Used by the Aggregate: 4.25GB
           Space Saved by Aggregate Data Reduction: 1.99GB
                 Aggregate Data Reduction SE Ratio: 1.47:1
              Logical Size Used by Snapshot Copies: 6.04GB
             Physical Size Used by Snapshot Copies: 2.25MB
              Snapshot Volume Data Reduction Ratio: 2751.27:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 2751.27:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                           152KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                           160KB     0%    0%
2 entries were displayed.

Space Saved by Aggregate Data Reduction has not changed, but Auto Adaptive Compression in the volume show-footprint output now shows 1.01GB.

From this, I infer that compression had already been applied at the aggregate layer.

Running Storage Efficiency on vol1_dst

Next, run Storage Efficiency on vol1_dst, the volume on svm2.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 02:55:16 0B           0%              0B             3.01GB
svm2    vol1_dst
               Enabled Idle for 00:08:51 0B           0%              0B             3.03GB
2 entries were displayed.

::*> volume efficiency start -vserver svm2 -volume vol1_dst -scan-old-data

Warning: This operation scans all of the data in volume "vol1_dst" of Vserver "svm2". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol1_dst" of Vserver "svm2" has started.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 02:55:36 0B           0%              0B             3.01GB
svm2    vol1_dst
               Enabled 186368 KB Scanned 0B           0%              0B             3.03GB
2 entries were displayed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 02:55:49 0B           0%              0B             3.01GB
svm2    vol1_dst
               Enabled 2635776 KB Scanned
                                         0B           0%              0B             3.03GB
2 entries were displayed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 02:56:04 0B           0%              0B             3.01GB
svm2    vol1_dst
               Enabled 5515264 KB Scanned
                                         0B           0%              0B             3.03GB
2 entries were displayed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 02:57:11 0B           0%              0B             3.01GB
svm2    vol1_dst
               Enabled 3387724 KB (64%) Done
                                         0B           0%              120KB          3.03GB
2 entries were displayed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 02:57:58 0B           0%              0B             3.01GB
svm2    vol1_dst
               Enabled Idle for 00:00:18 6GB          0%              420KB          3.03GB
2 entries were displayed.

It seems to have processed 6GB worth of data. As logical-data-size shows, only about 3GB has been written to the volume, so why 6GB was processed is a mystery.

Check the volume, aggregate, and Snapshot information.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 12.19GB   16GB            15.20GB 3.01GB 19%          0B                 0%                         0B                  3.01GB       20%                  -                 3.01GB              0B                                  0%
svm2    vol1_dst
               3.58GB
                    541.2MB   3.58GB          3.40GB  2.87GB 84%          1.89GB             40%                        1.06GB              4.75GB       140%                 -                 3.02GB              0B                                  0%
2 entries were displayed.

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            3.05GB       0%
             Footprint in Performance Tier             3.08GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        30.29MB       0%
      Deduplication Metadata                           6.02MB       0%
           Deduplication                               6.02MB       0%
      Delayed Frees                                   26.04MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  3.11GB       0%

      Footprint Data Reduction                         1.02GB       0%
           Auto Adaptive Compression                   1.02GB       0%
      Effective Total Footprint                        2.09GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 12.06GB
                               Total Physical Used: 4.41GB
                    Total Storage Efficiency Ratio: 2.73:1
Total Data Reduction Logical Used Without Snapshots: 6.02GB
Total Data Reduction Physical Used Without Snapshots: 3.09GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.95:1
Total Data Reduction Logical Used without snapshots and flexclones: 6.02GB
Total Data Reduction Physical Used without snapshots and flexclones: 3.09GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.95:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 12.09GB
Total Physical Used in FabricPool Performance Tier: 4.46GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.71:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 6.05GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.14GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.92:1
                Logical Space Used for All Volumes: 6.02GB
               Physical Space Used for All Volumes: 4.12GB
               Space Saved by Volume Deduplication: 1.89GB
Space Saved by Volume Deduplication and pattern detection: 1.89GB
                Volume Deduplication Savings ratio: 1.46:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.46:1
               Logical Space Used by the Aggregate: 6.40GB
              Physical Space Used by the Aggregate: 4.41GB
           Space Saved by Aggregate Data Reduction: 1.99GB
                 Aggregate Data Reduction SE Ratio: 1.45:1
              Logical Size Used by Snapshot Copies: 6.04GB
             Physical Size Used by Snapshot Copies: 1.91GB
              Snapshot Volume Data Reduction Ratio: 3.16:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 3.16:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                           152KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    53%   63%
2 entries were displayed.

Deduplication savings come to 1.89GB. That is roughly what you would expect: the blocks of the 1-filled file are all identical and collapse to almost nothing (about 1 GiB saved), and the copied random file dedupes against its original (about another 1 GiB saved).

However, Total Physical Used has increased slightly, from 4.25GB to 4.41GB, so the deduplicated data blocks have not actually been freed physically. The snapshot show output above is consistent with this: the SnapMirror Snapshot copy on vol1_dst now holds 1.91GB, meaning the pre-deduplication blocks are still referenced by that Snapshot copy.

Creating the SnapMirror destination volume on svm3

Create the SnapMirror destination volume on svm3.

Cascaded SnapMirror does not support snapmirror protect.

snapmirror protect is not supported for creating SnapMirror cascades

How to configure SnapMirror cascade relationship - NetApp

Therefore, the following operations have to be performed manually:

  • Create the destination volume for the cascaded SnapMirror
  • Create the SnapMirror relationship
  • Run SnapMirror initialize

The volume is created with the name vol1_dst_dst.

::*> volume create -vserver svm3 -volume vol1_dst_dst -aggregate aggr1 -state online -type DP -size 4GB -tiering-policy none
[Job 63] Job succeeded: Successful

::*> volume show -volume vol1* -fields type, autosize-mode, max-autosize
vserver volume max-autosize autosize-mode type
------- ------ ------------ ------------- ----
svm     vol1   19.20GB      off           RW
svm2    vol1_dst
               100TB        grow_shrink   DP
svm3    vol1_dst_dst
               100TB        grow_shrink   DP
3 entries were displayed.

Enable Storage Efficiency on the volume just created.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst_dst
               Disabled
                       -      false       false              -                       false         true            false                             false                           false
3 entries were displayed.

::*> volume efficiency inactive-data-compression show -volume vol1_dst_dst -instance
There are no entries matching your query.

::*> volume efficiency on -vserver svm3 -volume vol1_dst_dst
Efficiency for volume "vol1_dst_dst" of Vserver "svm3" is enabled.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst_dst
               Enabled -      false       false              -                       false         true            false                             false                           false
3 entries were displayed.

::*> volume efficiency inactive-data-compression show -volume vol1_dst_dst -instance
There are no entries matching your query.

SnapMirror initialize between the svm2 and svm3 volumes

Initialize the SnapMirror relationship between the svm2 and svm3 volumes.

First, create the SnapMirror relationship.

::*> snapmirror create -source-path svm2:vol1_dst -destination-vserver svm3 -destination-volume vol1_dst_dst -policy MirrorAllSnapshots
Operation succeeded: snapmirror create for the relationship with destination "svm3:vol1_dst_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Uninitialized
                                      Idle           -         true    -
2 entries were displayed.

Next, run the initialize.

::*> snapmirror initialize -destination-path svm3:vol1_dst_dst -source-path svm2:vol1_dst
Operation is queued: snapmirror initialize of destination "svm3:vol1_dst_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Uninitialized
                                      Transferring   0B        true    12/22 04:44:26
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Uninitialized
                                      Transferring   376.5MB   true    12/22 04:44:39
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.
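
For monitoring a transfer like this, narrowing the output with -fields can be handier than scanning the full snapmirror show output each time. Something like the following should work (the exact field names here are my assumption):

::*> snapmirror show -destination-path svm3:vol1_dst_dst -fields state, status, total-progress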

::*> snapmirror show -destination-path svm3:vol1_dst_dst

                                  Source Path: svm2:vol1_dst
                               Source Cluster: -
                               Source Vserver: svm2
                                Source Volume: vol1_dst
                             Destination Path: svm3:vol1_dst_dst
                          Destination Cluster: -
                          Destination Vserver: svm3
                           Destination Volume: vol1_dst_dst
                            Relationship Type: XDP
                      Relationship Group Type: none
                             Managing Vserver: svm3
                          SnapMirror Schedule: -
                       SnapMirror Policy Type: async-mirror
                            SnapMirror Policy: MirrorAllSnapshots
                                  Tries Limit: -
                            Throttle (KB/sec): unlimited
              Consistency Group Item Mappings: -
           Current Transfer Throttle (KB/sec): -
                                 Mirror State: Snapmirrored
                          Relationship Status: Idle
                      File Restore File Count: -
                       File Restore File List: -
                            Transfer Snapshot: -
                            Snapshot Progress: -
                               Total Progress: -
                    Network Compression Ratio: -
                          Snapshot Checkpoint: -
                              Newest Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                    Newest Snapshot Timestamp: 12/22 01:56:24
                            Exported Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                  Exported Snapshot Timestamp: 12/22 01:56:24
                                      Healthy: true
                              Relationship ID: b0b2694d-a084-11ee-981e-bdd56ead09c8
                          Source Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8
                     Destination Vserver UUID: 674509fa-a065-11ee-981e-bdd56ead09c8
                         Current Operation ID: -
                                Transfer Type: -
                               Transfer Error: -
                           Last Transfer Type: update
                          Last Transfer Error: -
                    Last Transfer Error Codes: -
                           Last Transfer Size: 0B
      Last Transfer Network Compression Ratio: 1:1
                       Last Transfer Duration: 0:0:0
                           Last Transfer From: svm2:vol1_dst
                  Last Transfer End Timestamp: 12/22 04:45:00
                             Unhealthy Reason: -
                        Progress Last Updated: -
                      Relationship Capability: 8.2 and above
                                     Lag Time: 2:49:31
                    Current Transfer Priority: -
                             SMTape Operation: -
                 Destination Volume Node Name: FsxId0ab6f9b00824a187c-01
                 Identity Preserve Vserver DR: -
                 Number of Successful Updates: 1
                     Number of Failed Updates: 0
                 Number of Successful Resyncs: 0
                     Number of Failed Resyncs: 0
                  Number of Successful Breaks: 0
                      Number of Failed Breaks: 0
                         Total Transfer Bytes: 2229260985
               Total Transfer Time in Seconds: 34
                Source Volume MSIDs Preserved: -
                                       OpMask: ffffffffffffffff
                       Is Auto Expand Enabled: -
          Percent Complete for Current Status: -

Check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
3 entries were displayed.

::*> volume efficiency inactive-data-compression show -volume vol1_dst_dst -instance

                                                                Volume: vol1_dst_dst
                                                               Vserver: svm3
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 0
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 0
             Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 0
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 12.19GB   16GB            15.20GB 3.01GB 19%          0B                 0%                         0B                  3.01GB       20%                  -                 3.01GB              0B                   0%
svm2    vol1_dst
               3.58GB
                    541.2MB   3.58GB          3.40GB  2.87GB 84%          1.89GB             40%                        1.06GB              4.75GB       140%                 -                 3.02GB              0B                   0%
svm3    vol1_dst_dst
               4GB  954.2MB   4GB             4GB     3.07GB 76%          0B                 0%                         3GB                 3.07GB       77%                  -                 3.07GB              0B                   0%
3 entries were displayed.

::*> volume show-footprint -volume vol1_dst_dst


      Vserver : svm3
      Volume  : vol1_dst_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            3.07GB       0%
             Footprint in Performance Tier             3.10GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        31.23MB       0%
      Delayed Frees                                   34.50MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  3.13GB       0%

      Effective Total Footprint                        3.13GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 18.16GB
                               Total Physical Used: 6.33GB
                    Total Storage Efficiency Ratio: 2.87:1
Total Data Reduction Logical Used Without Snapshots: 9.04GB
Total Data Reduction Physical Used Without Snapshots: 5.03GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.80:1
Total Data Reduction Logical Used without snapshots and flexclones: 9.04GB
Total Data Reduction Physical Used without snapshots and flexclones: 5.03GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.80:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 18.23GB
Total Physical Used in FabricPool Performance Tier: 6.43GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.83:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 9.12GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 5.13GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.78:1
                Logical Space Used for All Volumes: 9.04GB
               Physical Space Used for All Volumes: 7.15GB
               Space Saved by Volume Deduplication: 1.89GB
Space Saved by Volume Deduplication and pattern detection: 1.89GB
                Volume Deduplication Savings ratio: 1.26:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.26:1
               Logical Space Used by the Aggregate: 9.31GB
              Physical Space Used by the Aggregate: 6.33GB
           Space Saved by Aggregate Data Reduction: 2.98GB
                 Aggregate Data Reduction SE Ratio: 1.47:1
              Logical Size Used by Snapshot Copies: 9.11GB
             Physical Size Used by Snapshot Copies: 1.91GB
              Snapshot Volume Data Reduction Ratio: 4.76:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 4.76:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                           152KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    53%   63%
svm3     vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                           144KB     0%    0%
3 entries were displayed.

Even though the data was transferred from vol1_dst, where deduplication had taken effect, deduplication appears not to have taken effect at all on vol1_dst_dst.
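
Incidentally, to get the deduplicated state of vol1_dst reflected on vol1_dst_dst, a plausible approach is to run Storage Efficiency against vol1_dst and then transfer the next hop again. A minimal sketch (reusing commands that appear elsewhere in this article, not a verified recipe at this point):

::*> volume efficiency start -vserver svm2 -volume vol1_dst -scan-old-data
::*> snapmirror update -destination-path svm3:vol1_dst_dst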

Checking the Snapshots on vol1_dst_dst, they were exactly the same as those on vol1 and vol1_dst. The SnapMirror policy of the vol1_dst to vol1_dst_dst relationship is MirrorAllSnapshots, and that policy contains a rule for the sm_created label.

::*> snapmirror policy show
Vserver Policy             Policy Number         Transfer
Name    Name               Type   Of Rules Tries Priority Comment
------- ------------------ ------ -------- ----- -------- ----------
FsxId06a2871837f70a7fc
        Asynchronous       mirror-vault  3     8  normal  A unified Asynchronous SnapMirror and SnapVault policy for mirroring the latest active file system and daily and weekly Snapshot copies with an hourly transfer schedule.
  SnapMirror Label: sm_created                         Keep:       1
                    daily                                          7
                    weekly                                        52
                                                 Total Keep:      60

FsxId06a2871837f70a7fc
        AutomatedFailOver  automated-failover
                                         1     8  normal  Policy for SnapMirror Synchronous with zero RTO guarantee where client I/O will not be disrupted on replication failure.
  SnapMirror Label: sm_created                         Keep:       1
                                                 Total Keep:       1

FsxId06a2871837f70a7fc
        CloudBackupDefault vault         1     8  normal  Vault policy with daily rule.
  SnapMirror Label: daily                              Keep:       7
                                                 Total Keep:       7

FsxId06a2871837f70a7fc
        Continuous         continuous    0     8  normal  Policy for S3 bucket mirroring.
  SnapMirror Label: -                                  Keep:       -
                                                 Total Keep:       0

FsxId06a2871837f70a7fc
        DPDefault          async-mirror  2     8  normal  Asynchronous SnapMirror policy for mirroring all Snapshot copies and the latest active file system.
  SnapMirror Label: sm_created                         Keep:       1
                    all_source_snapshots                           1
                                                 Total Keep:       2

FsxId06a2871837f70a7fc
        DailyBackup        vault         1     8  normal  Vault policy with a daily rule and a daily transfer schedule.
  SnapMirror Label: daily                              Keep:       7
                                                 Total Keep:       7

FsxId06a2871837f70a7fc
        Migrate            migrate       2     8  normal  Policy for Migrate
  SnapMirror Label: sm_created                         Keep:       1
                    all_source_snapshots                           1
                                                 Total Keep:       2

FsxId06a2871837f70a7fc
        MirrorAllSnapshots async-mirror  2     8  normal  Asynchronous SnapMirror policy for mirroring all Snapshot copies and the latest active file system.
  SnapMirror Label: sm_created                         Keep:       1
                    all_source_snapshots                           1
                                                 Total Keep:       2

FsxId06a2871837f70a7fc
        MirrorAllSnapshotsDiscardNetwork
                           async-mirror  2     8  normal  Asynchronous SnapMirror policy for mirroring all Snapshot copies and the latest active file system excluding the network configurations.
   Discard Configs: network
  SnapMirror Label: sm_created                         Keep:       1
                    all_source_snapshots                           1
                                                 Total Keep:       2

FsxId06a2871837f70a7fc
        MirrorAndVault     mirror-vault  3     8  normal  A unified Asynchronous SnapMirror and SnapVault policy for mirroring the latest active file system and daily and weekly Snapshot copies.
  SnapMirror Label: sm_created                         Keep:       1
                    daily                                          7
                    weekly                                        52
                                                 Total Keep:      60

FsxId06a2871837f70a7fc
        MirrorAndVaultDiscardNetwork
                           mirror-vault  3     8  normal  A unified Asynchronous SnapMirror and SnapVault policy for mirroring the latest active file system and daily and weekly Snapshot copies excluding the network configurations.
   Discard Configs: network
  SnapMirror Label: sm_created                         Keep:       1
                    daily                                          7
                    weekly                                        52
                                                 Total Keep:      60


Vserver Policy             Policy Number         Transfer
Name    Name               Type   Of Rules Tries Priority Comment
------- ------------------ ------ -------- ----- -------- ----------
FsxId06a2871837f70a7fc
        MirrorLatest       async-mirror  1     8  normal  Asynchronous SnapMirror policy for mirroring the latest active file system.
  SnapMirror Label: sm_created                         Keep:       1
                                                 Total Keep:       1

FsxId06a2871837f70a7fc
        SnapCenterSync     sync-mirror   2     8  normal  Policy for SnapMirror Synchronous for Snap Center with Application Created Snapshot configuration.
  SnapMirror Label: sm_created                         Keep:       2
                    app_consistent                                 1
                                                 Total Keep:       3

FsxId06a2871837f70a7fc
        StrictSync         strict-sync-mirror
                                         1     8  normal  Policy for SnapMirror Synchronous where client access will be disrupted on replication failure.
  SnapMirror Label: sm_created                         Keep:       2
                                                 Total Keep:       2

FsxId06a2871837f70a7fc
        Sync               sync-mirror   1     8  normal  Policy for SnapMirror Synchronous where client access will not be disrupted on replication failure.
  SnapMirror Label: sm_created                         Keep:       2
                                                 Total Keep:       2

FsxId06a2871837f70a7fc
        Unified7year       mirror-vault  4     8  normal  Unified SnapMirror policy with 7year retention.
  SnapMirror Label: sm_created                         Keep:       1
                    daily                                          7
                    weekly                                        52
                    monthly                                       84
                                                 Total Keep:     144

FsxId06a2871837f70a7fc
        XDPDefault         vault         2     8  normal  Vault policy with daily and weekly rules.
  SnapMirror Label: daily                              Keep:       7
                    weekly                                        52
                                                 Total Keep:      59

17 entries were displayed.

Because of that, I wondered, "Does it create a Snapshot for me at SnapMirror time?", but that does not seem to be the case.

This is because the volume type of the SnapMirror source volume is DP, not RW. Attempting to create a Snapshot on a DP volume fails with the following error.

::*> snapshot create -vserver svm2 -volume vol1_dst -snapshot test.2023-12-22_0459 -snapmirror-label test

Error: command failed: Snapshot copies can only be created on read/write (RW) volumes
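
Since a DP volume cannot take Snapshots, one workaround is to create the Snapshot on the RW source volume and let each hop of the cascade carry it downstream. A minimal sketch, assuming a hypothetical Snapshot name test.example:

::*> snapshot create -vserver svm -volume vol1 -snapshot test.example -snapmirror-label test
::*> snapmirror update -destination-path svm2:vol1_dst
::*> snapmirror update -destination-path svm3:vol1_dst_dst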

Space Saved by Aggregate Data Reduction is 2.98GB, a 1GB increase over the value before the SnapMirror transfer. So it seems safe to say that aggregate-layer data reduction savings are maintained in a SnapMirror cascade as well.

Also, Inactive data compression, which had been disabled, is now enabled.
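
To check just the enablement flag rather than the full -instance output, filtering with -fields should also work (the field name is my assumption):

::*> volume efficiency inactive-data-compression show -volume vol1_dst_dst -fields is-enabled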

Re-running the cascaded SnapMirror transfer

Adding test files

Let's re-run the cascaded SnapMirror transfer.

As preparation, add some test files.

The following three files will be created:

  1. A 1GiB text file filled with the character a
  2. A 1GiB binary file of random blocks
  3. A copy of file 2

The first file is for checking whether deduplication and compression take effect, while the second and third are for checking whether deduplication takes effect between identical files.

The log from creating them is as follows.

$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/urandom_block_file2 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.38572 s, 168 MB/s

$ sudo cp /mnt/fsxn/vol1/urandom_block_file2 /mnt/fsxn/vol1/urandom_block_file2_copy

$ head -c 1G /dev/zero | tr \\0 a | sudo tee /mnt/fsxn/vol1/a_padding_file > /dev/null

$ ls -lh /mnt/fsxn/vol1
total 6.1G
-rw-r--r--. 1 root root 1.0G Dec 22 01:46 1_padding_file
-rw-r--r--. 1 root root 1.0G Dec 22 05:28 a_padding_file
-rw-r--r--. 1 root root 1.0G Dec 22 01:47 urandom_block_file
-rw-r--r--. 1 root root 1.0G Dec 22 05:02 urandom_block_file2
-rw-r--r--. 1 root root 1.0G Dec 22 05:02 urandom_block_file2_copy
-rw-r--r--. 1 root root 1.0G Dec 22 01:47 urandom_block_file_copy

$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-0058ae83d258ab2e3.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1 nfs4   16G  5.4G  9.9G  35% /mnt/fsxn/vol1

Check the Storage Efficiency, volume, and aggregate information.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 9.89GB    16GB            15.20GB 5.31GB 34%          774.5MB            12%                        774.5MB             6.06GB       40%                  -                 6.06GB              0B                   0%
svm2    vol1_dst
               3.58GB
                    541.1MB   3.58GB          3.40GB  2.87GB 84%          1.89GB             40%                        1.06GB              4.75GB       140%                 -                 3.02GB              0B                   0%
svm3    vol1_dst_dst
               4GB  954.2MB   4GB             4GB     3.07GB 76%          0B                 0%                         3GB                 3.07GB       77%                  -                 3.07GB              0B                   0%
3 entries were displayed.

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            5.31GB       1%
             Footprint in Performance Tier             5.45GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        92.66MB       0%
      Deduplication Metadata                           6.02MB       0%
           Deduplication                               6.02MB       0%
      Delayed Frees                                   142.8MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  5.54GB       1%

      Effective Total Footprint                        5.54GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 21.19GB
                               Total Physical Used: 7.85GB
                    Total Storage Efficiency Ratio: 2.70:1
Total Data Reduction Logical Used Without Snapshots: 12.07GB
Total Data Reduction Physical Used Without Snapshots: 6.57GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.84:1
Total Data Reduction Logical Used without snapshots and flexclones: 12.07GB
Total Data Reduction Physical Used without snapshots and flexclones: 6.57GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.84:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 21.28GB
Total Physical Used in FabricPool Performance Tier: 7.97GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.67:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 12.17GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 6.70GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.81:1
                Logical Space Used for All Volumes: 12.07GB
               Physical Space Used for All Volumes: 9.42GB
               Space Saved by Volume Deduplication: 2.65GB
Space Saved by Volume Deduplication and pattern detection: 2.65GB
                Volume Deduplication Savings ratio: 1.28:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.28:1
               Logical Space Used by the Aggregate: 11.81GB
              Physical Space Used by the Aggregate: 7.85GB
           Space Saved by Aggregate Data Reduction: 3.96GB
                 Aggregate Data Reduction SE Ratio: 1.50:1
              Logical Size Used by Snapshot Copies: 9.11GB
             Physical Size Used by Snapshot Copies: 1.91GB
              Snapshot Volume Data Reduction Ratio: 4.76:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 4.76:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

About 774.5MB has been deduplicated. Since inline deduplication is disabled, post-process deduplication must have done the work.
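
To double-check that the savings came from post-process rather than inline deduplication, the volume's efficiency settings can be inspected with the same fields used earlier in this article (a quick sanity check, nothing more):

::*> volume efficiency show -volume vol1 -fields state, policy, inline-dedupe, inline-compression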

Also, Space Saved by Aggregate Data Reduction is 3.96GB, meaning the aggregate-layer data reduction grew by about 1GB.

SnapMirror update between the svm and svm2 volumes

Perform an incremental SnapMirror transfer between the svm and svm2 volumes.

Before transferring, I create a Snapshot in advance. There is no particular intent behind it.

::*> snapshot create -vserver svm -volume vol1 -snapshot test.2023-12-22_0533 -snapmirror-label test

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                           228KB     0%    0%
                  test.2023-12-22_0533                     156KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    53%   63%
svm3     vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                           144KB     0%    0%
4 entries were displayed.

Now, run the incremental SnapMirror transfer.

::*> snapmirror update -destination-path svm2:vol1_dst
Operation is queued: snapmirror update of destination "svm2:vol1_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Transferring   0B        true    12/22 05:35:28
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:32:23 1.76GB       7%              12.40MB        6.06GB
svm2    vol1_dst
               Enabled Pre-procesing transfer changelogs 2352608 KB
                                         6GB          0%              448KB          6.11GB
svm3    vol1_dst_dst
               Enabled Idle for 00:54:43 0B           0%              0B             3.07GB
3 entries were displayed.

::*>
::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:32:36 1.76GB       7%              12.40MB        6.06GB
svm2    vol1_dst
               Enabled 1245968 KB (95%) Done
                                         6GB          1%              4.30MB         6.15GB
svm3    vol1_dst_dst
               Enabled Idle for 00:54:56 0B           0%              0B             3.07GB
3 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

::*> snapmirror show -destination-path svm2:vol1_dst

                                  Source Path: svm:vol1
                               Source Cluster: -
                               Source Vserver: svm
                                Source Volume: vol1
                             Destination Path: svm2:vol1_dst
                          Destination Cluster: -
                          Destination Vserver: svm2
                           Destination Volume: vol1_dst
                            Relationship Type: XDP
                      Relationship Group Type: none
                             Managing Vserver: svm2
                          SnapMirror Schedule: -
                       SnapMirror Policy Type: async-mirror
                            SnapMirror Policy: MirrorAllSnapshots
                                  Tries Limit: -
                            Throttle (KB/sec): unlimited
              Consistency Group Item Mappings: -
           Current Transfer Throttle (KB/sec): -
                                 Mirror State: Snapmirrored
                          Relationship Status: Idle
                      File Restore File Count: -
                       File Restore File List: -
                            Transfer Snapshot: -
                            Snapshot Progress: -
                               Total Progress: -
                    Network Compression Ratio: -
                          Snapshot Checkpoint: -
                              Newest Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                    Newest Snapshot Timestamp: 12/22 05:35:28
                            Exported Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                  Exported Snapshot Timestamp: 12/22 05:35:28
                                      Healthy: true
                              Relationship ID: 4f726f26-a06d-11ee-981e-bdd56ead09c8
                          Source Vserver UUID: 04ee1778-a058-11ee-981e-bdd56ead09c8
                     Destination Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8
                         Current Operation ID: -
                                Transfer Type: -
                               Transfer Error: -
                           Last Transfer Type: update
                          Last Transfer Error: -
                    Last Transfer Error Codes: -
                           Last Transfer Size: 1.33GB
      Last Transfer Network Compression Ratio: 1:1
                       Last Transfer Duration: 0:0:12
                           Last Transfer From: svm:vol1
                  Last Transfer End Timestamp: 12/22 05:35:40
                             Unhealthy Reason: -
                        Progress Last Updated: -
                      Relationship Capability: 8.2 and above
                                     Lag Time: 0:18:12
                    Current Transfer Priority: -
                             SMTape Operation: -
                 Destination Volume Node Name: FsxId0ab6f9b00824a187c-01
                 Identity Preserve Vserver DR: -
                 Number of Successful Updates: 2
                     Number of Failed Updates: 0
                 Number of Successful Resyncs: 0
                     Number of Failed Resyncs: 0
                  Number of Successful Breaks: 0
                      Number of Failed Breaks: 0
                         Total Transfer Bytes: 3658511072
               Total Transfer Time in Seconds: 48
                Source Volume MSIDs Preserved: -
                                       OpMask: ffffffffffffffff
                       Is Auto Expand Enabled: -
          Percent Complete for Current Status: -

It appears 1.33GB was transferred. Although 3.0GB of test files were added, I believe roughly 700MB was saved by deduplication and about 1GB by aggregate-layer data reduction, which accounts for the difference.
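
As a rough sanity check, assuming the two kinds of savings simply stack:

3.0GB (test files added) - 0.7GB (deduplication on vol1) - 1.0GB (aggregate-layer data reduction) ≈ 1.3GB ≈ Last Transfer Size 1.33GB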

Check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:32:42 1.76GB       7%              12.40MB        6.06GB
svm2    vol1_dst
               Enabled Idle for 00:00:02 45.10MB      1%              4.10MB         6.09GB
svm3    vol1_dst_dst
               Enabled Idle for 00:55:02 0B           0%              0B             3.07GB
3 entries were displayed.


::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance

                                                                Volume: vol1_dst
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 0
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 41
             Time since Last Inactive Data Compression Scan ended(sec): 40
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 40
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 66%

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 9.89GB    16GB            15.20GB 5.31GB 34%          774.5MB            12%                        774.5MB             6.06GB       40%                  -                 6.06GB              0B                   0%
svm2    vol1_dst
               6.37GB
                    998.4MB   6.37GB          6.05GB  5.08GB 83%          3.47GB             41%                        2.48GB              8.53GB       141%                 -                 6.07GB              0B                   0%
svm3    vol1_dst_dst
               4GB  954.2MB   4GB             4GB     3.07GB 76%          0B                 0%                         3GB                 3.07GB       77%                  -                 3.07GB              0B                   0%
3 entries were displayed.

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            5.40GB       1%
             Footprint in Performance Tier             5.45GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        43.37MB       0%
      Deduplication Metadata                          12.30MB       0%
           Deduplication                              12.30MB       0%
      Delayed Frees                                   53.75MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  5.50GB       1%

      Footprint Data Reduction                         1.80GB       0%
           Auto Adaptive Compression                   1.80GB       0%
      Effective Total Footprint                        3.70GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 48.47GB
                               Total Physical Used: 9.25GB
                    Total Storage Efficiency Ratio: 5.24:1
Total Data Reduction Logical Used Without Snapshots: 15.10GB
Total Data Reduction Physical Used Without Snapshots: 7.43GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.03:1
Total Data Reduction Logical Used without snapshots and flexclones: 15.10GB
Total Data Reduction Physical Used without snapshots and flexclones: 7.43GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.03:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 48.59GB
Total Physical Used in FabricPool Performance Tier: 9.42GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 5.16:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 15.22GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 7.60GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.00:1
                Logical Space Used for All Volumes: 15.10GB
               Physical Space Used for All Volumes: 10.87GB
               Space Saved by Volume Deduplication: 4.23GB
Space Saved by Volume Deduplication and pattern detection: 4.23GB
                Volume Deduplication Savings ratio: 1.39:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.39:1
               Logical Space Used by the Aggregate: 14.17GB
              Physical Space Used by the Aggregate: 9.25GB
           Space Saved by Aggregate Data Reduction: 4.92GB
                 Aggregate Data Reduction SE Ratio: 1.53:1
              Logical Size Used by Snapshot Copies: 33.38GB
             Physical Size Used by Snapshot Copies: 2.79GB
              Snapshot Volume Data Reduction Ratio: 11.98:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 11.98:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                           228KB     0%    0%
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           148KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    30%   42%
                  test.2023-12-22_0533                   893.2MB    14%   25%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           156KB     0%    0%
svm3     vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                           144KB     0%    0%
7 entries were displayed.

The deduplication savings on vol1_dst grew from 1.89GB to 3.47GB, an increase of roughly 1.6GB.

Since the deduplication savings on vol1 were about 800MB, roughly 800MB of additional deduplication occurred even after subtracting that amount. This is presumably deduplication between urandom_block_file2 and urandom_block_file2_copy, or within a_padding_file.
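
In numbers, using the values reported above:

3.47GB - 1.89GB ≈ 1.58GB (increase in dedupe-space-saved on vol1_dst)
1.58GB - 0.77GB ≈ 0.8GB  (additional deduplication on vol1_dst itself)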

The additional deduplication can also be inferred from the fact that the test.2023-12-22_0533 Snapshot on vol1_dst is 893.2MB.

Also, Space Saved by Aggregate Data Reduction increased by about 1GB, from 3.96GB to 4.92GB.

Running Storage Efficiency on vol1_dst

Run Storage Efficiency on vol1_dst, the volume on svm2.

::*> volume efficiency start -vserver svm2 -volume vol1_dst -scan-old-data

Warning: This operation scans all of the data in volume "vol1_dst" of Vserver "svm2". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol1_dst" of Vserver "svm2" has started.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:43:00 1.76GB       7%              12.40MB        6.06GB
svm2    vol1_dst
               Enabled 3194880 KB Scanned
                                         45.10MB      0%              0B             6.05GB
svm3    vol1_dst_dst
               Enabled Idle for 01:05:20 0B           0%              0B             3.07GB
3 entries were displayed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:43:14 1.76GB       7%              12.40MB        6.06GB
svm2    vol1_dst
               Enabled 5239752 KB Scanned
                                         45.10MB      0%              0B             6.05GB
svm3    vol1_dst_dst
               Enabled Idle for 01:05:34 0B           0%              0B             3.07GB
3 entries were displayed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:43:42 1.76GB       7%              12.40MB        6.06GB
svm2    vol1_dst
               Enabled 4336 KB (0%) Done 45.10MB      0%              0B             6.05GB
svm3    vol1_dst_dst
               Enabled Idle for 01:06:02 0B           0%              0B             3.07GB
3 entries were displayed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:45:12 1.76GB       7%              12.40MB        6.06GB
svm2    vol1_dst
               Enabled 6332144 KB (83%) Done
                                         45.10MB      0%              300KB          6.05GB
svm3    vol1_dst_dst
               Enabled Idle for 01:07:32 0B           0%              0B             3.07GB
3 entries were displayed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:46:02 1.76GB       7%              12.40MB        6.06GB
svm2    vol1_dst
               Enabled Idle for 00:00:25 9.22GB       0%              420KB          6.06GB
svm3    vol1_dst_dst
               Enabled Idle for 01:08:22 0B           0%              0B             3.07GB
3 entries were displayed.

9.22GB worth of data was processed.

Check the volume, aggregate, and Snapshot information.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 9.89GB    16GB            15.20GB 5.31GB 34%          774.5MB            12%                        774.5MB             6.06GB       40%                  -                 6.06GB              0B                   0%
svm2    vol1_dst
               6.37GB
                    967.7MB   6.37GB          6.05GB  5.11GB 84%          3.89GB             43%                        2.09GB              8.98GB       148%                 -                 6.04GB              0B                   0%
svm3    vol1_dst_dst
               4GB  954.2MB   4GB             4GB     3.07GB 76%          0B                 0%                         3GB                 3.07GB       77%                  -                 3.07GB              0B                   0%
3 entries were displayed.

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            5.43GB       1%
             Footprint in Performance Tier             5.49GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        43.37MB       0%
      Deduplication Metadata                          12.04MB       0%
           Deduplication                              12.04MB       0%
      Delayed Frees                                   61.92MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  5.54GB       1%

      Footprint Data Reduction                         1.81GB       0%
           Auto Adaptive Compression                   1.81GB       0%
      Effective Total Footprint                        3.73GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 48.47GB
                               Total Physical Used: 9.33GB
                    Total Storage Efficiency Ratio: 5.20:1
Total Data Reduction Logical Used Without Snapshots: 15.10GB
Total Data Reduction Physical Used Without Snapshots: 7.19GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.10:1
Total Data Reduction Logical Used without snapshots and flexclones: 15.10GB
Total Data Reduction Physical Used without snapshots and flexclones: 7.19GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.10:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 48.57GB
Total Physical Used in FabricPool Performance Tier: 9.47GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 5.13:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 15.19GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 7.34GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.07:1
                Logical Space Used for All Volumes: 15.10GB
               Physical Space Used for All Volumes: 10.45GB
               Space Saved by Volume Deduplication: 4.65GB
Space Saved by Volume Deduplication and pattern detection: 4.65GB
                Volume Deduplication Savings ratio: 1.44:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.44:1
               Logical Space Used by the Aggregate: 14.25GB
              Physical Space Used by the Aggregate: 9.33GB
           Space Saved by Aggregate Data Reduction: 4.92GB
                 Aggregate Data Reduction SE Ratio: 1.53:1
              Logical Size Used by Snapshot Copies: 33.38GB
             Physical Size Used by Snapshot Copies: 3.26GB
              Snapshot Volume Data Reduction Ratio: 10.25:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 10.25:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                           228KB     0%    0%
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           148KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    30%   47%
                  test.2023-12-22_0533                   893.2MB    14%   29%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         480.5MB     7%   18%
svm3     vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                           144KB     0%    0%
7 entries were displayed.

The deduplication savings did not change. On the other hand, the size of vol1_dst's snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528 Snapshot came to 480.5MB. My guess is that the locations of the data blocks were rearranged internally.

Also, Auto Adaptive Compression increased from 1.02GB to 1.81GB.
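
If you only want to watch these compression counters rather than scan the full report each time, you can pull just those lines from a client shell. This is a minimal sketch: `<management-endpoint>` is a placeholder for the FSxN management endpoint, and the grep patterns simply match the English labels in the output above.

$ # Extract only the compression-related lines from the footprint report
$ ssh fsxadmin@<management-endpoint> "volume show-footprint -volume vol1_dst" \
    | grep -E "Auto Adaptive Compression|Footprint Data Reduction|Effective Total Footprint"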

Running Inactive data compression on vol1_dst

Let's run Inactive data compression on vol1_dst, the volume on svm2.

::*> volume efficiency inactive-data-compression start -vserver svm2 -volume vol1_dst -inactive-days 0
Inactive data compression scan started on volume "vol1_dst" in Vserver "svm2"

::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance

                                                                Volume: vol1_dst
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 7%
                                                  Phase1 L1s Processed: 6168
                                                    Phase1 Lns Skipped:
                                                                        L1:     0
                                                                        L2:     0
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 2220400
                                               Phase2 Blocks Processed: 157264
                                     Number of Cold Blocks Encountered: 295168
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 1784
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 4998
             Time since Last Inactive Data Compression Scan ended(sec): 4997
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 4997
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 66%


::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance

                                                                Volume: vol1_dst
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 72%
                                                  Phase1 L1s Processed: 6168
                                                    Phase1 Lns Skipped:
                                                                        L1:     0
                                                                        L2:     0
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 2220400
                                               Phase2 Blocks Processed: 1608704
                                     Number of Cold Blocks Encountered: 344832
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 10008
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 5012
             Time since Last Inactive Data Compression Scan ended(sec): 5011
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 5011
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 66%


::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance

                                                                Volume: vol1_dst
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 344832
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 10008
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 32
             Time since Last Inactive Data Compression Scan ended(sec): 11
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 11
                           Average time for Cold Data Compression(sec): 10
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 61%
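
Rather than re-running the show command by hand until the scan finishes, you could poll for the IDLE state from a client shell. A sketch under the same assumption as before: `<management-endpoint>` is a placeholder for the FSxN management endpoint.

$ # Wait until the inactive data compression scan returns to IDLE
$ until ssh fsxadmin@<management-endpoint> \
      "volume efficiency inactive-data-compression show -volume vol1_dst -instance" \
    | grep -q "Progress: IDLE"; do
    sleep 30
  done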

Let's check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
3 entries were displayed.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 9.89GB    16GB            15.20GB 5.31GB 34%          774.5MB            12%                        774.5MB             6.06GB       40%                  -                 6.06GB              0B                   0%
svm2    vol1_dst
               6.37GB
                    967.6MB   6.37GB          6.05GB  5.11GB 84%          3.89GB             43%                        2.09GB              8.98GB       148%                 -                 6.04GB              0B                   0%
svm3    vol1_dst_dst
               4GB  954.2MB   4GB             4GB     3.07GB 76%          0B                 0%                         3GB                 3.07GB       77%                  -                 3.07GB              0B                   0%
3 entries were displayed.

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            5.43GB       1%
             Footprint in Performance Tier             5.49GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        43.37MB       0%
      Deduplication Metadata                          12.04MB       0%
           Deduplication                              12.04MB       0%
      Delayed Frees                                   62.57MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  5.54GB       1%

      Footprint Data Reduction                         2.04GB       0%
           Auto Adaptive Compression                   2.04GB       0%
      Effective Total Footprint                        3.50GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 48.41GB
                               Total Physical Used: 9.23GB
                    Total Storage Efficiency Ratio: 5.25:1
Total Data Reduction Logical Used Without Snapshots: 15.03GB
Total Data Reduction Physical Used Without Snapshots: 7.11GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.11:1
Total Data Reduction Logical Used without snapshots and flexclones: 15.03GB
Total Data Reduction Physical Used without snapshots and flexclones: 7.11GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.11:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 48.57GB
Total Physical Used in FabricPool Performance Tier: 9.44GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 5.15:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 15.19GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 7.32GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.08:1
                Logical Space Used for All Volumes: 15.03GB
               Physical Space Used for All Volumes: 10.39GB
               Space Saved by Volume Deduplication: 4.65GB
Space Saved by Volume Deduplication and pattern detection: 4.65GB
                Volume Deduplication Savings ratio: 1.45:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.45:1
               Logical Space Used by the Aggregate: 14.19GB
              Physical Space Used by the Aggregate: 9.23GB
           Space Saved by Aggregate Data Reduction: 4.96GB
                 Aggregate Data Reduction SE Ratio: 1.54:1
              Logical Size Used by Snapshot Copies: 33.38GB
             Physical Size Used by Snapshot Copies: 3.26GB
              Snapshot Volume Data Reduction Ratio: 10.25:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 10.25:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                           228KB     0%    0%
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           148KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    30%   47%
                  test.2023-12-22_0533                   893.2MB    14%   29%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         480.5MB     7%   18%
svm3     vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                           144KB     0%    0%
7 entries were displayed.

Space Saved by Aggregate Data Reduction increased slightly, from 4.92GB to 4.96GB. Auto Adaptive Compression also increased, from 1.81GB to 2.04GB.

SnapMirror update between the svm2 and svm3 volumes

Let's run an incremental SnapMirror transfer between the svm2 and svm3 volumes.

::*> snapmirror update -destination-path svm3:vol1_dst_dst
Operation is queued: snapmirror update of destination "svm3:vol1_dst_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Transferring   642.0MB   true    12/22 05:55:29
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Finalizing     1.38GB    true    12/22 05:55:44
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

::*> snapmirror show -destination-path svm3:vol1_dst_dst

                                  Source Path: svm2:vol1_dst
                               Source Cluster: -
                               Source Vserver: svm2
                                Source Volume: vol1_dst
                             Destination Path: svm3:vol1_dst_dst
                          Destination Cluster: -
                          Destination Vserver: svm3
                           Destination Volume: vol1_dst_dst
                            Relationship Type: XDP
                      Relationship Group Type: none
                             Managing Vserver: svm3
                          SnapMirror Schedule: -
                       SnapMirror Policy Type: async-mirror
                            SnapMirror Policy: MirrorAllSnapshots
                                  Tries Limit: -
                            Throttle (KB/sec): unlimited
              Consistency Group Item Mappings: -
           Current Transfer Throttle (KB/sec): -
                                 Mirror State: Snapmirrored
                          Relationship Status: Idle
                      File Restore File Count: -
                       File Restore File List: -
                            Transfer Snapshot: -
                            Snapshot Progress: -
                               Total Progress: -
                    Network Compression Ratio: -
                          Snapshot Checkpoint: -
                              Newest Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                    Newest Snapshot Timestamp: 12/22 05:35:28
                            Exported Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                  Exported Snapshot Timestamp: 12/22 05:35:28
                                      Healthy: true
                              Relationship ID: b0b2694d-a084-11ee-981e-bdd56ead09c8
                          Source Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8
                     Destination Vserver UUID: 674509fa-a065-11ee-981e-bdd56ead09c8
                         Current Operation ID: -
                                Transfer Type: -
                               Transfer Error: -
                           Last Transfer Type: update
                          Last Transfer Error: -
                    Last Transfer Error Codes: -
                           Last Transfer Size: 1.39GB
      Last Transfer Network Compression Ratio: 1:1
                       Last Transfer Duration: 0:0:44
                           Last Transfer From: svm2:vol1_dst
                  Last Transfer End Timestamp: 12/22 05:56:10
                             Unhealthy Reason: -
                        Progress Last Updated: -
                      Relationship Capability: 8.2 and above
                                     Lag Time: 0:20:56
                    Current Transfer Priority: -
                             SMTape Operation: -
                 Destination Volume Node Name: FsxId0ab6f9b00824a187c-01
                 Identity Preserve Vserver DR: -
                 Number of Successful Updates: 2
                     Number of Failed Updates: 0
                 Number of Successful Resyncs: 0
                     Number of Failed Resyncs: 0
                  Number of Successful Breaks: 0
                      Number of Failed Breaks: 0
                         Total Transfer Bytes: 3720662445
               Total Transfer Time in Seconds: 78
                Source Volume MSIDs Preserved: -
                                       OpMask: ffffffffffffffff
                       Is Auto Expand Enabled: -
          Percent Complete for Current Status: -

It looks like 1.39GB was transferred, almost the same size as the incremental SnapMirror sync from vol1 to vol1_dst.
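
If you just want these headline numbers, snapmirror show can report them as fields. A sketch, with `<management-endpoint>` again a placeholder for the FSxN management endpoint:

$ # Report only the size and duration of the last transfer
$ ssh fsxadmin@<management-endpoint> \
    "snapmirror show -destination-path svm3:vol1_dst_dst -fields last-transfer-size, last-transfer-duration"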

Let's check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:53:29 1.76GB       7%              12.40MB        6.06GB
svm2    vol1_dst
               Enabled Idle for 00:07:52 9.22GB       0%              420KB          6.06GB
svm3    vol1_dst_dst
               Enabled 3353944 KB (100%) Done
                                         0B           0%              780KB          6.27GB
3 entries were displayed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:55:34 1.76GB       7%              12.40MB        6.06GB
svm2    vol1_dst
               Enabled Idle for 00:09:57 9.22GB       0%              420KB          6.06GB
svm3    vol1_dst_dst
               Enabled Idle for 00:02:00 5.24GB       0%              836KB          6.06GB
3 entries were displayed.

::*> volume efficiency inactive-data-compression show -volume vol1_dst_dst -instance

                                                                Volume: vol1_dst_dst
                                                               Vserver: svm3
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 0
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 136
             Time since Last Inactive Data Compression Scan ended(sec): 135
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 135
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 9.89GB    16GB            15.20GB 5.31GB 34%          774.5MB            12%                        774.5MB             6.06GB       40%                  -                 6.06GB              0B                   0%
svm2    vol1_dst
               6.37GB
                    967.6MB   6.37GB          6.05GB  5.11GB 84%          3.89GB             43%                        2.09GB              8.98GB       148%                 -                 6.04GB              0B                   0%
svm3    vol1_dst_dst
               6.56GB
                    1.12GB    6.56GB          6.56GB  5.44GB 82%          3.83GB             41%                        2.13GB              9.25GB       141%                 -                 6.04GB              0B                   0%
3 entries were displayed.

::*> volume show-footprint -volume vol1_dst_dst


      Vserver : svm3
      Volume  : vol1_dst_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            5.44GB       1%
             Footprint in Performance Tier             5.48GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        44.31MB       0%
      Deduplication Metadata                          12.30MB       0%
           Deduplication                              12.30MB       0%
      Delayed Frees                                   40.04MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  5.53GB       1%

      Effective Total Footprint                        5.53GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 63.60GB
                               Total Physical Used: 10.76GB
                    Total Storage Efficiency Ratio: 5.91:1
Total Data Reduction Logical Used Without Snapshots: 18.06GB
Total Data Reduction Physical Used Without Snapshots: 6.58GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.74:1
Total Data Reduction Logical Used without snapshots and flexclones: 18.06GB
Total Data Reduction Physical Used without snapshots and flexclones: 6.58GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.74:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 63.73GB
Total Physical Used in FabricPool Performance Tier: 10.96GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 5.81:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 18.19GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 6.78GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.68:1
                Logical Space Used for All Volumes: 18.06GB
               Physical Space Used for All Volumes: 9.58GB
               Space Saved by Volume Deduplication: 8.48GB
Space Saved by Volume Deduplication and pattern detection: 8.48GB
                Volume Deduplication Savings ratio: 1.89:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.89:1
               Logical Space Used by the Aggregate: 16.66GB
              Physical Space Used by the Aggregate: 10.76GB
           Space Saved by Aggregate Data Reduction: 5.89GB
                 Aggregate Data Reduction SE Ratio: 1.55:1
              Logical Size Used by Snapshot Copies: 45.54GB
             Physical Size Used by Snapshot Copies: 6.46GB
              Snapshot Volume Data Reduction Ratio: 7.04:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 7.04:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                           228KB     0%    0%
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           148KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    30%   47%
                  test.2023-12-22_0533                   893.2MB    14%   29%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         480.5MB     7%   18%
svm3     vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    29%   46%
                  test.2023-12-22_0533                    1.29GB    20%   37%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           148KB     0%    0%
9 entries were displayed.

vol1_dst_dst's deduplication savings came to 3.83GB, roughly the same value as vol1_dst.

In other words, with cascaded SnapMirror, the Storage Efficiency results produced on the second volume are preserved on the downstream destination volume as well.
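
In practice, propagating those savings end to end just means chaining the two incremental updates in order, first hop and then second hop, with the same commands used in this walkthrough:

::*> snapmirror update -destination-path svm2:vol1_dst
::*> snapmirror update -destination-path svm3:vol1_dst_dst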

Also, Space Saved by Aggregate Data Reduction increased by roughly 1GB, from 4.96GB to 5.89GB.

Does deduplication kick in when a file duplicates data already transferred by SnapMirror?

Changing the Tiering Policy of the svm volume to All

This has little to do with cascaded SnapMirror itself, but let's check whether deduplication kicks in when a file duplicates data that has already been transferred by SnapMirror.

To prevent deduplication from running at the moment we copy a file within the volume, change vol1's Tiering Policy to All.

::*> volume modify -vserver svm -volume vol1 -tiering-policy all
Volume modify successful on volume vol1 of Vserver svm.

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            5.31GB       1%
             Footprint in Performance Tier             2.63GB      48%
             Footprint in FSxFabricpoolObjectStore
                                                       2.82GB      52%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        92.66MB       0%
      Deduplication Metadata                           6.02MB       0%
           Deduplication                               6.02MB       0%
      Delayed Frees                                   144.8MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  5.54GB       1%

      Footprint Data Reduction in capacity tier        1.13GB        -
      Effective Total Footprint                        4.42GB       0%

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            5.33GB       1%
             Footprint in Performance Tier            234.9MB       4%
             Footprint in FSxFabricpoolObjectStore
                                                       5.24GB      96%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        92.66MB       0%
      Deduplication Metadata                           6.02MB       0%
           Deduplication                               6.02MB       0%
      Delayed Frees                                   146.0MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  5.57GB       1%

      Footprint Data Reduction in capacity tier        1.94GB        -
      Effective Total Footprint                        3.63GB       0%

96% of the data has been tiered to capacity pool storage.

Let's check the aggregate information.

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 63.60GB
                               Total Physical Used: 10.25GB
                    Total Storage Efficiency Ratio: 6.20:1
Total Data Reduction Logical Used Without Snapshots: 18.06GB
Total Data Reduction Physical Used Without Snapshots: 6.13GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.94:1
Total Data Reduction Logical Used without snapshots and flexclones: 18.06GB
Total Data Reduction Physical Used without snapshots and flexclones: 6.13GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.94:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 43.42GB
Total Physical Used in FabricPool Performance Tier: 7.23GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 6.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 12.38GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.13GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.96:1
                Logical Space Used for All Volumes: 18.06GB
               Physical Space Used for All Volumes: 9.58GB
               Space Saved by Volume Deduplication: 8.48GB
Space Saved by Volume Deduplication and pattern detection: 8.48GB
                Volume Deduplication Savings ratio: 1.89:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.89:1
               Logical Space Used by the Aggregate: 16.15GB
              Physical Space Used by the Aggregate: 10.25GB
           Space Saved by Aggregate Data Reduction: 5.89GB
                 Aggregate Data Reduction SE Ratio: 1.58:1
              Logical Size Used by Snapshot Copies: 45.54GB
             Physical Size Used by Snapshot Copies: 6.49GB
              Snapshot Volume Data Reduction Ratio: 7.02:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 7.02:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

Space Saved by Aggregate Data Reduction is unchanged before and after tiering. This shows that the savings from data reduction performed at the aggregate level are preserved even after tiering.
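
As a quick sanity check of the show-footprint output above (plain arithmetic, using the values from the second run): 5.57GB Total Footprint - 1.94GB Footprint Data Reduction in capacity tier = 3.63GB Effective Total Footprint.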

This behavior comes from the update below. Reading the text alone, you might take it to mean "additional compression can now be applied on capacity pool storage," but what it actually means is "data is now compressed at the moment it is tiered to capacity pool storage."

Adding test files

We will create the following two files:

  1. A 1GiB text file filled with "abcde"
  2. A copy of a 1GiB random-block binary file that already exists on the volume

The log from creating them is as follows.

$ sudo cp /mnt/fsxn/vol1/urandom_block_file /mnt/fsxn/vol1/urandom_block_file_copy2

$ yes abcde | tr -d '\n' | sudo dd of=/mnt/fsxn/vol1/abcde_padding_file bs=1024 count=1024K
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.88529 s, 156 MB/s

$ ls -lh /mnt/fsxn/vol1
total 8.1G
-rw-r--r--. 1 root root 1.0G Dec 22 01:46 1_padding_file
-rw-r--r--. 1 root root 1.0G Dec 22 05:28 a_padding_file
-rw-r--r--. 1 root root 1.0G Dec 22 06:55 abcde_padding_file
-rw-r--r--. 1 root root 1.0G Dec 22 01:47 urandom_block_file
-rw-r--r--. 1 root root 1.0G Dec 22 05:02 urandom_block_file2
-rw-r--r--. 1 root root 1.0G Dec 22 05:02 urandom_block_file2_copy
-rw-r--r--. 1 root root 1.0G Dec 22 01:47 urandom_block_file_copy
-rw-r--r--. 1 root root 1.0G Dec 22 06:41 urandom_block_file_copy2

$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-0058ae83d258ab2e3.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1 nfs4   16G  7.4G  7.9G  49% /mnt/fsxn/vol1

Let's check the volume and aggregate information.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 7.87GB    16GB            15.20GB 7.33GB 48%          774.5MB            9%                         774.5MB             8.09GB       53%                  -                 8.09GB              -                                   -
svm2    vol1_dst
               6.37GB
                    967.5MB   6.37GB          6.05GB  5.11GB 84%          3.89GB             43%                        2.09GB              8.98GB       148%                 -                 6.04GB              0B                                  0%
svm3    vol1_dst_dst
               6.56GB
                    1.12GB    6.56GB          6.56GB  5.44GB 82%          3.83GB             41%                        2.13GB              9.25GB       141%                 -                 6.04GB              0B                                  0%
3 entries were displayed.

::*> volume show-footprint -volume vol1


      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            7.36GB       1%
             Footprint in Performance Tier             1.26GB      17%
             Footprint in FSxFabricpoolObjectStore
                                                       6.25GB      83%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        92.66MB       0%
      Deduplication Metadata                          16.79MB       0%
           Deduplication                               8.78MB       0%
           Temporary Deduplication                     8.02MB       0%
      Delayed Frees                                   155.7MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  7.62GB       1%

      Footprint Data Reduction in capacity tier        1.94GB        -
      Effective Total Footprint                        5.68GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 65.61GB
                               Total Physical Used: 11.72GB
                    Total Storage Efficiency Ratio: 5.60:1
Total Data Reduction Logical Used Without Snapshots: 20.07GB
Total Data Reduction Physical Used Without Snapshots: 7.63GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.63:1
Total Data Reduction Logical Used without snapshots and flexclones: 20.07GB
Total Data Reduction Physical Used without snapshots and flexclones: 7.63GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.63:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 46.43GB
Total Physical Used in FabricPool Performance Tier: 7.71GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 6.02:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 13.48GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.63GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.71:1
                Logical Space Used for All Volumes: 20.07GB
               Physical Space Used for All Volumes: 11.58GB
               Space Saved by Volume Deduplication: 8.48GB
Space Saved by Volume Deduplication and pattern detection: 8.48GB
                Volume Deduplication Savings ratio: 1.73:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.73:1
               Logical Space Used by the Aggregate: 18.60GB
              Physical Space Used by the Aggregate: 11.72GB
           Space Saved by Aggregate Data Reduction: 6.88GB
                 Aggregate Data Reduction SE Ratio: 1.59:1
              Logical Size Used by Snapshot Copies: 45.54GB
             Physical Size Used by Snapshot Copies: 6.49GB
              Snapshot Volume Data Reduction Ratio: 7.02:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 7.02:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

No additional deduplication appears to have taken place.

Also, Space Saved by Aggregate Data Reduction increased by about 1GB. I believe this corresponds to the 1GiB text file filled with "abcde".
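
As a rough, purely illustrative check of how compressible this pattern is, you can compress a sample of it locally. gzip here is only a stand-in for the lzopro algorithm ONTAP actually uses, so the ratio is indicative at best:

$ # Compress 100MiB of the repeating "abcde" pattern and measure the output size
$ yes abcde | tr -d '\n' | head -c 100M | gzip -c | wc -c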

SnapMirror update between the svm and svm2 volumes

Let's run an incremental SnapMirror transfer between the svm and svm2 volumes.

::*> snapmirror update -destination-path svm2:vol1_dst
Operation is queued: snapmirror update of destination "svm2:vol1_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Transferring   0B        true    12/22 06:59:00
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Transferring   119.2MB   true    12/22 06:59:13
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Transferring   436.0MB   true    12/22 06:59:29
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Transferring   1.12GB    true    12/22 07:00:31
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Transferring   1.12GB    true    12/22 07:00:31
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Transferring   1.37GB    true    12/22 07:01:02
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Transferring   1.49GB    true    12/22 07:01:18
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Transferring   1.75GB    true    12/22 07:02:04
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress             last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- -------------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled 880112 KB (67%) Done 1.76GB       9%              32.43MB        8.09GB
svm2    vol1_dst
               Enabled Idle for 01:13:40    9.22GB       0%              448KB          6.10GB
svm3    vol1_dst_dst
               Enabled Idle for 01:05:43    5.24GB       0%              836KB          6.06GB
3 entries were displayed.

::*>
::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress             last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- -------------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled 895604 KB (68%) Done 1.76GB       9%              32.43MB        8.09GB
svm2    vol1_dst
               Enabled 830968 KB (39%) Done 9.22GB       0%              448KB          8.15GB
svm3    vol1_dst_dst
               Enabled Idle for 01:06:17    5.24GB       0%              836KB          6.06GB
3 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

::*> snapmirror show -destination-path svm2:vol1_dst

                                  Source Path: svm:vol1
                               Source Cluster: -
                               Source Vserver: svm
                                Source Volume: vol1
                             Destination Path: svm2:vol1_dst
                          Destination Cluster: -
                          Destination Vserver: svm2
                           Destination Volume: vol1_dst
                            Relationship Type: XDP
                      Relationship Group Type: none
                             Managing Vserver: svm2
                          SnapMirror Schedule: -
                       SnapMirror Policy Type: async-mirror
                            SnapMirror Policy: MirrorAllSnapshots
                                  Tries Limit: -
                            Throttle (KB/sec): unlimited
              Consistency Group Item Mappings: -
           Current Transfer Throttle (KB/sec): -
                                 Mirror State: Snapmirrored
                          Relationship Status: Idle
                      File Restore File Count: -
                       File Restore File List: -
                            Transfer Snapshot: -
                            Snapshot Progress: -
                               Total Progress: -
                    Network Compression Ratio: -
                          Snapshot Checkpoint: -
                              Newest Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                    Newest Snapshot Timestamp: 12/22 06:59:00
                            Exported Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                  Exported Snapshot Timestamp: 12/22 06:59:00
                                      Healthy: true
                              Relationship ID: 4f726f26-a06d-11ee-981e-bdd56ead09c8
                          Source Vserver UUID: 04ee1778-a058-11ee-981e-bdd56ead09c8
                     Destination Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8
                         Current Operation ID: -
                                Transfer Type: -
                               Transfer Error: -
                           Last Transfer Type: update
                          Last Transfer Error: -
                    Last Transfer Error Codes: -
                           Last Transfer Size: 2.04GB
      Last Transfer Network Compression Ratio: 1:1
                       Last Transfer Duration: 0:3:59
                           Last Transfer From: svm:vol1
                  Last Transfer End Timestamp: 12/22 07:02:59
                             Unhealthy Reason: -
                        Progress Last Updated: -
                      Relationship Capability: 8.2 and above
                                     Lag Time: 0:4:29
                    Current Transfer Priority: -
                             SMTape Operation: -
                 Destination Volume Node Name: FsxId0ab6f9b00824a187c-01
                 Identity Preserve Vserver DR: -
                 Number of Successful Updates: 3
                     Number of Failed Updates: 0
                 Number of Successful Resyncs: 0
                     Number of Failed Resyncs: 0
                  Number of Successful Breaks: 0
                      Number of Failed Breaks: 0
                         Total Transfer Bytes: 5853976980
               Total Transfer Time in Seconds: 287
                Source Volume MSIDs Preserved: -
                                       OpMask: ffffffffffffffff
                       Is Auto Expand Enabled: -
          Percent Complete for Current Status: -

The transfer size was 2.04GB.
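If you only need the transfer metrics rather than the full instance output, narrowing the query with -fields should also work. A minimal sketch reusing the relationship above (command only; the field names are the standard snapmirror show fields):

snapmirror show -destination-path svm2:vol1_dst -fields last-transfer-size, last-transfer-duration, lag-time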

Also, checking Storage Efficiency while the SnapMirror transfer was in flight showed that Storage Efficiency was running on vol1_dst.

Let's check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress             last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- -------------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled 922412 KB (70%) Done 1.76GB       9%              32.43MB        8.09GB
svm2    vol1_dst
               Enabled Idle for 00:00:25    45.08MB      0%              700KB          8.10GB
svm3    vol1_dst_dst
               Enabled Idle for 01:07:11    5.24GB       0%              836KB          6.06GB
3 entries were displayed.

::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance

                                                                Volume: vol1_dst
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 0
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 651
             Time since Last Inactive Data Compression Scan ended(sec): 650
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 650
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 61%


::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 7.86GB    16GB            15.20GB 7.33GB 48%          774.5MB            9%                         774.5MB             8.09GB       53%                  -                 8.09GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.14GB    7.54GB          7.16GB  6.02GB 84%          4.96GB             45%                        3.03GB              10.95GB      153%                 -                 8.08GB              0B                                  0%
svm3    vol1_dst_dst
               6.56GB
                    1.12GB    6.56GB          6.56GB  5.44GB 82%          3.83GB             41%                        2.13GB              9.25GB       141%                 -                 6.04GB              0B                                  0%
3 entries were displayed.

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            6.40GB       1%
             Footprint in Performance Tier             6.47GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        55.75MB       0%
      Deduplication Metadata                          12.30MB       0%
           Deduplication                              12.30MB       0%
      Delayed Frees                                   70.04MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  6.54GB       1%

      Footprint Data Reduction                         2.41GB       0%
           Auto Adaptive Compression                   2.41GB       0%
      Effective Total Footprint                        4.13GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 80.78GB
                               Total Physical Used: 11.94GB
                    Total Storage Efficiency Ratio: 6.77:1
Total Data Reduction Logical Used Without Snapshots: 22.08GB
Total Data Reduction Physical Used Without Snapshots: 7.82GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.82:1
Total Data Reduction Logical Used without snapshots and flexclones: 22.08GB
Total Data Reduction Physical Used without snapshots and flexclones: 7.82GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.82:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 53.45GB
Total Physical Used in FabricPool Performance Tier: 7.91GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 6.76:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 14.40GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.81GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.78:1
                Logical Space Used for All Volumes: 22.08GB
               Physical Space Used for All Volumes: 12.53GB
               Space Saved by Volume Deduplication: 9.55GB
Space Saved by Volume Deduplication and pattern detection: 9.55GB
                Volume Deduplication Savings ratio: 1.76:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.76:1
               Logical Space Used by the Aggregate: 18.81GB
              Physical Space Used by the Aggregate: 11.94GB
           Space Saved by Aggregate Data Reduction: 6.88GB
                 Aggregate Data Reduction SE Ratio: 1.58:1
              Logical Size Used by Snapshot Copies: 58.70GB
             Physical Size Used by Snapshot Copies: 6.49GB
              Snapshot Volume Data Reduction Ratio: 9.04:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 9.04:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           164KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           148KB     0%    0%
svm3     vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    29%   46%
                  test.2023-12-22_0533                    1.29GB    20%   37%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           148KB     0%    0%
10 entries were displayed.

The deduplication savings increased by about 1GB, from 3.89GB to 4.96GB.

This shows that deduplication still takes effect when the transferred data duplicates files that have already been sent over SnapMirror.

Space Saved by Aggregate Data Reduction, on the other hand, did not change. From this, the aggregate-layer data reduction achieved on capacity pool storage appears to be lost once the data is transferred with SnapMirror.
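To track just these two counters between transfers, something narrower than the full outputs above can help. A minimal sketch reusing fields and commands that already appear in this post (the -aggregate filter is an assumption; commands only):

volume show -volume vol1_dst -fields dedupe-space-saved, dedupe-space-saved-percent
aggr show-efficiency -aggregate aggr1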

Running Inactive data compression on vol1_dst

Run Inactive data compression on vol1_dst, the volume on svm2. Passing -inactive-days 0 makes the scan treat all data as inactive, so everything is processed immediately.

::*> volume efficiency inactive-data-compression start -vserver svm2 -volume vol1_dst -inactive-days 0
Inactive data compression scan started on volume "vol1_dst" in Vserver "svm2"

::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance

                                                                Volume: vol1_dst
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 57%
                                                  Phase1 L1s Processed: 3609
                                                    Phase1 Lns Skipped:
                                                                        L1:   790
                                                                        L2:    15
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 2854496
                                               Phase2 Blocks Processed: 1632009
                                     Number of Cold Blocks Encountered: 248712
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 235024
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 5144
             Time since Last Inactive Data Compression Scan ended(sec): 5124
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 5124
                           Average time for Cold Data Compression(sec): 10
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 61%


::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance

                                                                Volume: vol1_dst
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 59%
                                                  Phase1 L1s Processed: 3609
                                                    Phase1 Lns Skipped:
                                                                        L1:   790
                                                                        L2:    15
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 2854496
                                               Phase2 Blocks Processed: 1684375
                                     Number of Cold Blocks Encountered: 249440
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 235040
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 5157
             Time since Last Inactive Data Compression Scan ended(sec): 5137
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 5137
                           Average time for Cold Data Compression(sec): 10
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 61%


::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance

                                                                Volume: vol1_dst
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 69%
                                                  Phase1 L1s Processed: 3609
                                                    Phase1 Lns Skipped:
                                                                        L1:   790
                                                                        L2:    15
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 2854496
                                               Phase2 Blocks Processed: 1980084
                                     Number of Cold Blocks Encountered: 252176
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 235064
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 5183
             Time since Last Inactive Data Compression Scan ended(sec): 5163
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 5163
                           Average time for Cold Data Compression(sec): 10
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 61%


::*> volume efficiency inactive-data-compression show -volume vol1_dst -instance

                                                                Volume: vol1_dst
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 257856
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 235072
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 81
             Time since Last Inactive Data Compression Scan ended(sec): 20
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 20
                           Average time for Cold Data Compression(sec): 27
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 52%

It looks like an additional 235,072 blocks were compressed.

Check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume,using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
3 entries were displayed.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 7.86GB    16GB            15.20GB 7.33GB 48%          791.6MB            10%                        791.6MB             8.11GB       53%                  -                 8.11GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.14GB    7.54GB          7.16GB  6.02GB 84%          4.96GB             45%                        3.03GB              10.95GB      153%                 -                 8.08GB              0B                                  0%
svm3    vol1_dst_dst
               6.56GB
                    1.12GB    6.56GB          6.56GB  5.44GB 82%          3.83GB             41%                        2.13GB              9.25GB       141%                 -                 6.04GB              0B                                  0%
3 entries were displayed.

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            6.40GB       1%
             Footprint in Performance Tier             6.47GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        55.75MB       0%
      Deduplication Metadata                          12.30MB       0%
           Deduplication                              12.30MB       0%
      Delayed Frees                                   71.98MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  6.54GB       1%

      Footprint Data Reduction                         2.96GB       0%
           Auto Adaptive Compression                   2.96GB       0%
      Effective Total Footprint                        3.58GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 80.78GB
                               Total Physical Used: 11.56GB
                    Total Storage Efficiency Ratio: 6.99:1
Total Data Reduction Logical Used Without Snapshots: 22.08GB
Total Data Reduction Physical Used Without Snapshots: 7.68GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.88:1
Total Data Reduction Logical Used without snapshots and flexclones: 22.08GB
Total Data Reduction Physical Used without snapshots and flexclones: 7.68GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.88:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 53.45GB
Total Physical Used in FabricPool Performance Tier: 7.55GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 7.08:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 14.40GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.68GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.91:1
                Logical Space Used for All Volumes: 22.08GB
               Physical Space Used for All Volumes: 12.52GB
               Space Saved by Volume Deduplication: 9.56GB
Space Saved by Volume Deduplication and pattern detection: 9.56GB
                Volume Deduplication Savings ratio: 1.76:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.76:1
               Logical Space Used by the Aggregate: 19.33GB
              Physical Space Used by the Aggregate: 11.56GB
           Space Saved by Aggregate Data Reduction: 7.77GB
                 Aggregate Data Reduction SE Ratio: 1.67:1
              Logical Size Used by Snapshot Copies: 58.70GB
             Physical Size Used by Snapshot Copies: 6.49GB
              Snapshot Volume Data Reduction Ratio: 9.04:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 9.04:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           272KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           148KB     0%    0%
svm3     vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    29%   46%
                  test.2023-12-22_0533                    1.29GB    20%   37%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           148KB     0%    0%
10 entries were displayed.

Space Saved by Aggregate Data Reduction grew from 6.88GB to 7.77GB, roughly 900MB of additional savings. This is presumably the added text files being compressed.
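As a rough sanity check: assuming the standard 4KiB WAFL block size, the 235,072 newly compressed blocks reported above work out to 235072 × 4KiB ≈ 918MiB, which lines up with the roughly 900MB increase in Space Saved by Aggregate Data Reduction.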

Running Storage Efficiency on vol1_dst

Next, run Storage Efficiency on vol1_dst, the volume on svm2, with -scan-old-data so that the existing data is scanned rather than only newly written blocks.

::*> volume efficiency start -vserver svm2 -volume vol1_dst -scan-old-data

Warning: This operation scans all of the data in volume "vol1_dst" of Vserver "svm2". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol1_dst" of Vserver "svm2" has started.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:08:46 1.70GB       9%              15.41MB        8.11GB
svm2    vol1_dst
               Enabled 5846532 KB Scanned
                                         45.08MB      0%              0B             8.07GB
svm3    vol1_dst_dst
               Enabled Idle for 01:28:42 5.24GB       0%              836KB          6.06GB
3 entries were displayed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:12:00 1.70GB       9%              15.41MB        8.11GB
svm2    vol1_dst
               Enabled 10234036 KB (95%) Done
                                         45.08MB      0%              540KB          8.07GB
svm3    vol1_dst_dst
               Enabled Idle for 01:31:56 5.24GB       0%              836KB          6.06GB
3 entries were displayed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:12:50 1.70GB       9%              15.41MB        8.11GB
svm2    vol1_dst
               Enabled Idle for 00:00:42 12.21GB      0%              540KB          8.08GB
svm3    vol1_dst_dst
               Enabled Idle for 01:32:46 5.24GB       0%              836KB          6.06GB
3 entries were displayed.

Check the volume and aggregate information.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 7.86GB    16GB            15.20GB 7.33GB 48%          791.6MB            10%                        791.6MB             8.11GB       53%                  -                 8.11GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.97GB             45%                        3.01GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm3    vol1_dst_dst
               6.56GB
                    1.12GB    6.56GB          6.56GB  5.44GB 82%          3.83GB             41%                        2.13GB              9.25GB       141%                 -                 6.04GB              0B                                  0%
3 entries were displayed.

::*> volume show-footprint -volume vol1_dst


      Vserver : svm2
      Volume  : vol1_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            6.44GB       1%
             Footprint in Performance Tier             6.51GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        55.75MB       0%
      Deduplication Metadata                          12.04MB       0%
           Deduplication                              12.04MB       0%
      Delayed Frees                                   72.96MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  6.58GB       1%

      Footprint Data Reduction                         2.98GB       0%
           Auto Adaptive Compression                   2.98GB       0%
      Effective Total Footprint                        3.60GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 80.78GB
                               Total Physical Used: 11.90GB
                    Total Storage Efficiency Ratio: 6.79:1
Total Data Reduction Logical Used Without Snapshots: 22.08GB
Total Data Reduction Physical Used Without Snapshots: 7.93GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.79:1
Total Data Reduction Logical Used without snapshots and flexclones: 22.08GB
Total Data Reduction Physical Used without snapshots and flexclones: 7.93GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.79:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 53.43GB
Total Physical Used in FabricPool Performance Tier: 7.87GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 6.79:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 14.38GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.91GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.68:1
                Logical Space Used for All Volumes: 22.08GB
               Physical Space Used for All Volumes: 12.50GB
               Space Saved by Volume Deduplication: 9.58GB
Space Saved by Volume Deduplication and pattern detection: 9.58GB
                Volume Deduplication Savings ratio: 1.77:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.77:1
               Logical Space Used by the Aggregate: 19.67GB
              Physical Space Used by the Aggregate: 11.90GB
           Space Saved by Aggregate Data Reduction: 7.77GB
                 Aggregate Data Reduction SE Ratio: 1.65:1
              Logical Size Used by Snapshot Copies: 58.70GB
             Physical Size Used by Snapshot Copies: 6.57GB
              Snapshot Volume Data Reduction Ratio: 8.94:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 8.94:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           272KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         79.82MB     1%    2%
svm3     vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    29%   46%
                  test.2023-12-22_0533                    1.29GB    20%   37%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           148KB     0%    0%
10 entries were displayed.

Deduplication savings now stand at 4.97GB, up about 1GB from the 3.89GB measured before these operations.

SnapMirror update between the svm2 and svm3 volumes

Run an incremental SnapMirror transfer between the svm2 and svm3 volumes.

::*> snapmirror update -destination-path svm3:vol1_dst_dst
Operation is queued: snapmirror update of destination "svm3:vol1_dst_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Finalizing     0B        true    12/22 07:31:30
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Finalizing     389.6MB   true    12/22 07:31:42
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

::*> snapmirror show -destination-path svm3:vol1_dst_dst

                                  Source Path: svm2:vol1_dst
                               Source Cluster: -
                               Source Vserver: svm2
                                Source Volume: vol1_dst
                             Destination Path: svm3:vol1_dst_dst
                          Destination Cluster: -
                          Destination Vserver: svm3
                           Destination Volume: vol1_dst_dst
                            Relationship Type: XDP
                      Relationship Group Type: none
                             Managing Vserver: svm3
                          SnapMirror Schedule: -
                       SnapMirror Policy Type: async-mirror
                            SnapMirror Policy: MirrorAllSnapshots
                                  Tries Limit: -
                            Throttle (KB/sec): unlimited
              Consistency Group Item Mappings: -
           Current Transfer Throttle (KB/sec): -
                                 Mirror State: Snapmirrored
                          Relationship Status: Idle
                      File Restore File Count: -
                       File Restore File List: -
                            Transfer Snapshot: -
                            Snapshot Progress: -
                               Total Progress: -
                    Network Compression Ratio: -
                          Snapshot Checkpoint: -
                              Newest Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                    Newest Snapshot Timestamp: 12/22 06:59:00
                            Exported Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                  Exported Snapshot Timestamp: 12/22 06:59:00
                                      Healthy: true
                              Relationship ID: b0b2694d-a084-11ee-981e-bdd56ead09c8
                          Source Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8
                     Destination Vserver UUID: 674509fa-a065-11ee-981e-bdd56ead09c8
                         Current Operation ID: -
                                Transfer Type: -
                               Transfer Error: -
                           Last Transfer Type: update
                          Last Transfer Error: -
                    Last Transfer Error Codes: -
                           Last Transfer Size: 389.6MB
      Last Transfer Network Compression Ratio: 1:1
                       Last Transfer Duration: 0:0:29
                           Last Transfer From: svm2:vol1_dst
                  Last Transfer End Timestamp: 12/22 07:31:59
                             Unhealthy Reason: -
                        Progress Last Updated: -
                      Relationship Capability: 8.2 and above
                                     Lag Time: 0:33:19
                    Current Transfer Priority: -
                             SMTape Operation: -
                 Destination Volume Node Name: FsxId0ab6f9b00824a187c-01
                 Identity Preserve Vserver DR: -
                 Number of Successful Updates: 3
                     Number of Failed Updates: 0
                 Number of Successful Resyncs: 0
                     Number of Failed Resyncs: 0
                  Number of Successful Breaks: 0
                      Number of Failed Breaks: 0
                         Total Transfer Bytes: 4129216624
               Total Transfer Time in Seconds: 107
                Source Volume MSIDs Preserved: -
                                       OpMask: ffffffffffffffff
                       Is Auto Expand Enabled: -
          Percent Complete for Current Status: -

The transfer size was 389.6MB.
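Here the update is triggered manually, but in practice you would normally attach a schedule to the relationship so that incremental transfers run automatically. A minimal sketch (hourly is one of ONTAP's built-in schedules; adjust for your environment):

snapmirror modify -destination-path svm3:vol1_dst_dst -schedule hourly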

Check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:15:44 1.70GB       9%              15.41MB        8.11GB
svm2    vol1_dst
               Enabled Idle for 00:03:36 12.21GB      0%              540KB          8.08GB
svm3    vol1_dst_dst
               Enabled Idle for 00:00:17 83.62MB      0%              900KB          8.09GB
3 entries were displayed.

::*> volume efficiency inactive-data-compression show -volume vol1_dst_dst -instance

                                                                Volume: vol1_dst_dst
                                                               Vserver: svm3
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 0
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 18
             Time since Last Inactive Data Compression Scan ended(sec): 17
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 17
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 7.86GB    16GB            15.20GB 7.33GB 48%          791.6MB            10%                        791.6MB             8.11GB       53%                  -                 8.11GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.97GB             45%                        3.01GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm3    vol1_dst_dst
               7.67GB
                    1.35GB    7.67GB          7.67GB  6.32GB 82%          5.08GB             45%                        2.88GB              11.37GB      148%                 -                 8.06GB              0B                                  0%
3 entries were displayed.

::*> volume show-footprint -volume vol1_dst_dst


      Vserver : svm3
      Volume  : vol1_dst_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            6.32GB       1%
             Footprint in Performance Tier             6.40GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        50.02MB       0%
      Deduplication Metadata                          13.21MB       0%
           Deduplication                              13.21MB       0%
      Delayed Frees                                   81.63MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  6.46GB       1%

      Effective Total Footprint                        6.46GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 90.85GB
                               Total Physical Used: 12.10GB
                    Total Storage Efficiency Ratio: 7.51:1
Total Data Reduction Logical Used Without Snapshots: 24.10GB
Total Data Reduction Physical Used Without Snapshots: 8.15GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.96:1
Total Data Reduction Logical Used without snapshots and flexclones: 24.10GB
Total Data Reduction Physical Used without snapshots and flexclones: 8.15GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.96:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 63.52GB
Total Physical Used in FabricPool Performance Tier: 8.08GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 7.86:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 16.40GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.15GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.95:1
                Logical Space Used for All Volumes: 24.10GB
               Physical Space Used for All Volumes: 13.27GB
               Space Saved by Volume Deduplication: 10.82GB
Space Saved by Volume Deduplication and pattern detection: 10.82GB
                Volume Deduplication Savings ratio: 1.82:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.82:1
               Logical Space Used by the Aggregate: 20.45GB
              Physical Space Used by the Aggregate: 12.10GB
           Space Saved by Aggregate Data Reduction: 8.36GB
                 Aggregate Data Reduction SE Ratio: 1.69:1
              Logical Size Used by Snapshot Copies: 66.76GB
             Physical Size Used by Snapshot Copies: 6.67GB
              Snapshot Volume Data Reduction Ratio: 10.01:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 10.01:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           272KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         79.82MB     1%    2%
svm3     vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   39%
                  test.2023-12-22_0533                    1.29GB    17%   30%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         101.9MB     1%    3%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           144KB     0%    0%
11 entries were displayed.

The deduplication savings are 5.08GB, so the savings have been carried over without issue.

In addition, the Storage Efficiency progress on vol1_dst_dst has been updated, which tells us that deduplication ran against the data transferred by SnapMirror.

Space Saved by Aggregate Data Reduction has also grown from 7.77GB to 8.36GB, an increase of roughly 600MB.

SnapMirror between the svm and svm3 volumes

Adding the SVM peering

Next, let's try SnapMirror between the svm and svm3 volumes.

As described in the NetApp official documentation below, if the intermediate volume of a cascaded SnapMirror becomes unavailable, it appears you can synchronize directly between the source volume and the final destination volume.

If the volume on B becomes unavailable, you can continue protecting A by synchronizing the relationship between C and A; a new baseline transfer is not required. When the resync completes, A has a direct mirror relationship with C, bypassing B. Note, however, that the resync deletes Snapshot copies and can cause the common Snapshot copy shared by the relationships in the cascade to be lost, in which case the relationship requires a new baseline.

The following figure shows a mirror-mirror cascade chain.

3.GUID-BEE3AE68-CC2F-46DC-A725-817797123BF4-low

How a mirror-mirror cascade works - NetApp ONTAP documentation

Let's actually try it.
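The overall recovery flow is summarized in the minimal sketch below, using the volume names from this post; each step and its actual output follow in the rest of this section.

::*> snapmirror quiesce -destination-path svm3:vol1_dst_dst
::*> snapmirror break -destination-path svm3:vol1_dst_dst
::*> snapmirror delete -destination-path svm3:vol1_dst_dst
::*> snapmirror create -source-path svm:vol1 -destination-path svm3:vol1_dst_dst -policy MirrorAllSnapshots
::*> snapmirror resync -destination-path svm3:vol1_dst_dst -source-path svm:vol1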

First, add the SVM peering between svm and svm3.

::*> vserver peer create -vserver svm -peer-vserver svm3 -applications snapmirror

Info: 'vserver peer create' command is successful.

::*> vserver peer show
            Peer        Peer                           Peering        Remote
Vserver     Vserver     State        Peer Cluster      Applications   Vserver
----------- ----------- ------------ ----------------- -------------- ---------
svm         svm2        peered       FsxId0ab6f9b00824a187c
                                                       snapmirror     svm2
svm         svm3        peered       FsxId0ab6f9b00824a187c
                                                       snapmirror     svm3
svm2        svm         peered       FsxId0ab6f9b00824a187c
                                                       snapmirror     svm
svm2        svm3        peered       FsxId0ab6f9b00824a187c
                                                       snapmirror     svm3
svm3        svm         peered       FsxId0ab6f9b00824a187c
                                                       snapmirror     svm
svm3        svm2        peered       FsxId0ab6f9b00824a187c
                                                       snapmirror     svm2
6 entries were displayed.

Deleting the SnapMirror relationship between the svm2 and svm3 volumes

Before creating the SnapMirror relationship between the svm and svm3 volumes, we delete the SnapMirror relationship between the svm2 and svm3 volumes.

This is because, by ONTAP's design, a single volume cannot be the destination of multiple SnapMirror relationships (fan-in). While a SnapMirror relationship to the destination exists, even one in the Broken-off state, creating another fails with the following error.

::*> snapmirror create -source-path svm:vol1 -destination-path svm3:vol1_dst_dst -policy MirrorAllSnapshots

Error: command failed: Relationship with destination svm3:vol1_dst_dst already exists.

Cut over the SnapMirror relationship.

::*> snapmirror quiesce -destination-path svm3:vol1_dst_dst
Operation succeeded: snapmirror quiesce for destination "svm3:vol1_dst_dst".

::*> snapmirror break -destination-path svm3:vol1_dst_dst
Operation succeeded: snapmirror break for destination "svm3:vol1_dst_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
svm2:vol1_dst
            XDP  svm3:vol1_dst_dst
                              Broken-off
                                      Idle           -         true    -
2 entries were displayed.

Then delete the SnapMirror relationship.

::*> snapmirror delete -destination-path svm3:vol1_dst_dst
Operation succeeded: snapmirror delete for the relationship with destination "svm3:vol1_dst_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
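Note that snapmirror delete only removes the relationship information on the destination side. In a cross-cluster cascade you would normally also clean up the source side with snapmirror release; the sketch below assumes the same paths as this environment, and -relationship-info-only true keeps the base Snapshot copies so that a later resync remains possible (in this single-cluster setup the delete alone was sufficient).

::*> snapmirror release -destination-path svm3:vol1_dst_dst -relationship-info-only true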

Resynchronizing SnapMirror between the svm and svm3 volumes

Let's resynchronize the SnapMirror between the svm and svm3 volumes.

First, create the SnapMirror relationship.

::*> snapmirror create -source-path svm:vol1 -destination-path svm3:vol1_dst_dst -policy MirrorAllSnapshots
Operation succeeded: snapmirror create for the relationship with destination "svm3:vol1_dst_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Broken-off
                                      Idle           -         true    -
2 entries were displayed.

Run an incremental SnapMirror transfer.

::*> snapmirror update -destination-path svm3:vol1_dst_dst
Operation is queued: snapmirror update of destination "svm3:vol1_dst_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Broken-off
                                      Idle           -         false   -
2 entries were displayed.

::*> snapmirror show -destination-path svm3:vol1_dst_dst

                                  Source Path: svm:vol1
                               Source Cluster: -
                               Source Vserver: svm
                                Source Volume: vol1
                             Destination Path: svm3:vol1_dst_dst
                          Destination Cluster: -
                          Destination Vserver: svm3
                           Destination Volume: vol1_dst_dst
                            Relationship Type: XDP
                      Relationship Group Type: none
                             Managing Vserver: svm3
                          SnapMirror Schedule: -
                       SnapMirror Policy Type: async-mirror
                            SnapMirror Policy: MirrorAllSnapshots
                                  Tries Limit: -
                            Throttle (KB/sec): unlimited
              Consistency Group Item Mappings: -
           Current Transfer Throttle (KB/sec): -
                                 Mirror State: Broken-off
                          Relationship Status: Idle
                      File Restore File Count: -
                       File Restore File List: -
                            Transfer Snapshot: -
                            Snapshot Progress: -
                               Total Progress: -
                    Network Compression Ratio: -
                          Snapshot Checkpoint: -
                              Newest Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                    Newest Snapshot Timestamp: 12/22 06:59:00
                            Exported Snapshot: -
                  Exported Snapshot Timestamp: -
                                      Healthy: false
                              Relationship ID: 27f77b8a-a09e-11ee-981e-bdd56ead09c8
                          Source Vserver UUID: 04ee1778-a058-11ee-981e-bdd56ead09c8
                     Destination Vserver UUID: 674509fa-a065-11ee-981e-bdd56ead09c8
                         Current Operation ID: -
                                Transfer Type: -
                               Transfer Error: -
                           Last Transfer Type: update
                          Last Transfer Error: Destination svm3:vol1_dst_dst must be a data-protection volume.
                    Last Transfer Error Codes: 6619546
                           Last Transfer Size: -
      Last Transfer Network Compression Ratio: -
                       Last Transfer Duration: -
                           Last Transfer From: svm:vol1
                  Last Transfer End Timestamp: 12/22 07:46:29
                             Unhealthy Reason: Transfer failed.
                        Progress Last Updated: -
                      Relationship Capability: 8.2 and above
                                     Lag Time: -
                    Current Transfer Priority: -
                             SMTape Operation: -
                 Destination Volume Node Name: FsxId0ab6f9b00824a187c-01
                 Identity Preserve Vserver DR: -
                 Number of Successful Updates: 0
                     Number of Failed Updates: 1
                 Number of Successful Resyncs: 0
                     Number of Failed Resyncs: 0
                  Number of Successful Breaks: 0
                      Number of Failed Breaks: 0
                         Total Transfer Bytes: 0
               Total Transfer Time in Seconds: 0
                Source Volume MSIDs Preserved: -
                                       OpMask: ffffffffffffffff
                       Is Auto Expand Enabled: -
          Percent Complete for Current Status: -

The update failed because vol1_dst_dst on svm3 is not a data-protection (DP) volume.

The following KB states that you should run snapmirror resync to re-establish the SnapMirror relationship.

  1. Run a snapmirror resync to re-establish the SnapMirror relationship
  2. Re-run the update and verify completion status

SnapMirror update fails with error "must be a data-protection volume" - NetApp Knowledge Base

Let's try it.

::*> snapmirror resync -destination-path svm3:vol1_dst_dst -source-path svm:vol1

Warning: All data newer than Snapshot copy snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900 on volume svm3:vol1_dst_dst will be deleted.
Do you want to continue? {y|n}: y
Operation is queued: initiate snapmirror resync to destination "svm3:vol1_dst_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

::*> snapmirror show -destination-path svm3:vol1_dst_dst

                                  Source Path: svm:vol1
                               Source Cluster: -
                               Source Vserver: svm
                                Source Volume: vol1
                             Destination Path: svm3:vol1_dst_dst
                          Destination Cluster: -
                          Destination Vserver: svm3
                           Destination Volume: vol1_dst_dst
                            Relationship Type: XDP
                      Relationship Group Type: none
                             Managing Vserver: svm3
                          SnapMirror Schedule: -
                       SnapMirror Policy Type: async-mirror
                            SnapMirror Policy: MirrorAllSnapshots
                                  Tries Limit: -
                            Throttle (KB/sec): unlimited
              Consistency Group Item Mappings: -
           Current Transfer Throttle (KB/sec): -
                                 Mirror State: Snapmirrored
                          Relationship Status: Idle
                      File Restore File Count: -
                       File Restore File List: -
                            Transfer Snapshot: -
                            Snapshot Progress: -
                               Total Progress: -
                    Network Compression Ratio: -
                          Snapshot Checkpoint: -
                              Newest Snapshot: snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                    Newest Snapshot Timestamp: 12/22 07:48:01
                            Exported Snapshot: snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                  Exported Snapshot Timestamp: 12/22 07:48:01
                                      Healthy: true
                              Relationship ID: 27f77b8a-a09e-11ee-981e-bdd56ead09c8
                          Source Vserver UUID: 04ee1778-a058-11ee-981e-bdd56ead09c8
                     Destination Vserver UUID: 674509fa-a065-11ee-981e-bdd56ead09c8
                         Current Operation ID: -
                                Transfer Type: -
                               Transfer Error: -
                           Last Transfer Type: resync
                          Last Transfer Error: -
                    Last Transfer Error Codes: -
                           Last Transfer Size: 3.27KB
      Last Transfer Network Compression Ratio: 1:1
                       Last Transfer Duration: 0:0:4
                           Last Transfer From: svm:vol1
                  Last Transfer End Timestamp: 12/22 07:48:05
                             Unhealthy Reason: -
                        Progress Last Updated: -
                      Relationship Capability: 8.2 and above
                                     Lag Time: 0:0:9
                    Current Transfer Priority: -
                             SMTape Operation: -
                 Destination Volume Node Name: FsxId0ab6f9b00824a187c-01
                 Identity Preserve Vserver DR: -
                 Number of Successful Updates: 0
                     Number of Failed Updates: 1
                 Number of Successful Resyncs: 1
                     Number of Failed Resyncs: 0
                  Number of Successful Breaks: 0
                      Number of Failed Breaks: 0
                         Total Transfer Bytes: 3352
               Total Transfer Time in Seconds: 4
                Source Volume MSIDs Preserved: -
                                       OpMask: ffffffffffffffff
                       Is Auto Expand Enabled: -
          Percent Complete for Current Status: -

The resync completed successfully.

Data newer than Snapshot copy snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900 appears to have been deleted.
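If you want to know beforehand which Snapshot copy a resync will fall back to, and therefore how much data it will discard, one approach is to list the SnapMirror Snapshot copies on both volumes and compare them; the newest copy present on both sides becomes the resync base. A sketch using this environment's names:

::*> snapshot show -vserver svm -volume vol1 -snapshot snapmirror* -fields snapshot
::*> snapshot show -vserver svm3 -volume vol1_dst_dst -snapshot snapmirror* -fields snapshot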

Let's check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 00:31:47 1.70GB       9%              15.41MB        8.11GB
svm2    vol1_dst
               Enabled Idle for 00:19:39 12.21GB      0%              540KB          8.08GB
svm3    vol1_dst_dst
               Disabled
                       Idle for 00:16:20 83.62MB      0%              900KB          8.06GB
3 entries were displayed.

::*> volume efficiency inactive-data-compression show -volume vol1_dst_dst -instance

                                                                Volume: vol1_dst_dst
                                                               Vserver: svm3
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 0
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 952
             Time since Last Inactive Data Compression Scan ended(sec): 951
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 951
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: false
                                                             Threshold: 0
                                                 Threshold Upper Limit: 0
                                                 Threshold Lower Limit: 0
                                            Client Read history window: 0
                                        Incompressible Data Percentage: 0%


::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 7.86GB    16GB            15.20GB 7.33GB 48%          791.6MB            10%                        791.6MB             8.11GB       53%                  -                 8.11GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.97GB             45%                        3.01GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm3    vol1_dst_dst
               5.24GB
                    844.8MB   5.24GB          5.24GB  4.41GB 84%          5.11GB             54%                        2.21GB              9.52GB       182%                 -                 8.06GB              0B                                  0%
3 entries were displayed.

::*> volume show-footprint -volume vol1_dst_dst


      Vserver : svm3
      Volume  : vol1_dst_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            4.41GB       0%
             Footprint in Performance Tier             4.51GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        50.02MB       0%
      Delayed Frees                                   104.2MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  4.56GB       1%

      Effective Total Footprint                        4.56GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 103.9GB
                               Total Physical Used: 12.54GB
                    Total Storage Efficiency Ratio: 8.29:1
Total Data Reduction Logical Used Without Snapshots: 24.07GB
Total Data Reduction Physical Used Without Snapshots: 9.54GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.52:1
Total Data Reduction Logical Used without snapshots and flexclones: 24.07GB
Total Data Reduction Physical Used without snapshots and flexclones: 9.54GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.52:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 68.72GB
Total Physical Used in FabricPool Performance Tier: 8.51GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 8.08:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 16.38GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 5.53GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.96:1
                Logical Space Used for All Volumes: 24.07GB
               Physical Space Used for All Volumes: 13.21GB
               Space Saved by Volume Deduplication: 10.86GB
Space Saved by Volume Deduplication and pattern detection: 10.86GB
                Volume Deduplication Savings ratio: 1.82:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.82:1
               Logical Space Used by the Aggregate: 20.18GB
              Physical Space Used by the Aggregate: 12.54GB
           Space Saved by Aggregate Data Reduction: 7.64GB
                 Aggregate Data Reduction SE Ratio: 1.61:1
              Logical Size Used by Snapshot Copies: 79.86GB
             Physical Size Used by Snapshot Copies: 4.82GB
              Snapshot Volume Data Reduction Ratio: 16.57:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 16.57:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 1

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           272KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                           136KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         79.82MB     1%    2%
svm3     vol1_dst_dst
                  test.2023-12-22_0533                    1.29GB    25%   30%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         101.9MB     2%    3%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         24.16MB     0%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                         41.10MB     1%    1%
12 entries were displayed.

There was no significant change in the deduplication savings or the aggregate-layer data reduction. I had assumed the resync would throw away the data reduction we had gained, but that does not seem to be the case.

The size of Snapshot copy snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801 has grown slightly.

Also, Storage Efficiency on vol1_dst_dst was now Disabled. My understanding is that Storage Efficiency stays Enabled even after running snapmirror break, so I suspect it was disabled at the time of the snapmirror resync.

Adding a test file

Now I am curious how SnapMirror behaves when vol1 has a delta relative to vol1_dst_dst.

Add a test file to vol1.

$ yes ABCDE | tr -d '\n' | sudo dd of=/mnt/fsxn/vol1/ABCDE_padding_file bs=1024 count=1024K
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 7.32669 s, 147 MB/s

$ ls -lh /mnt/fsxn/vol1/ABCDE_padding_file
-rw-r--r--. 1 root root 1.0G Dec 22 07:53 /mnt/fsxn/vol1/ABCDE_padding_file

$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-0058ae83d258ab2e3.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1 nfs4   16G  8.4G  6.9G  55% /mnt/fsxn/vol1

SnapMirror update between the svm and svm3 volumes

Run an incremental SnapMirror transfer between the svm and svm3 volumes.

::*> snapmirror update -destination-path svm3:vol1_dst_dst
Operation is queued: snapmirror update of destination "svm3:vol1_dst_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Snapmirrored
                                      Transferring   303.1MB   true    12/22 07:55:40
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Snapmirrored
                                      Transferring   729.2MB   true    12/22 07:56:27
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.

::*> snapmirror show -destination-path svm3:vol1_dst_dst

                                  Source Path: svm:vol1
                               Source Cluster: -
                               Source Vserver: svm
                                Source Volume: vol1
                             Destination Path: svm3:vol1_dst_dst
                          Destination Cluster: -
                          Destination Vserver: svm3
                           Destination Volume: vol1_dst_dst
                            Relationship Type: XDP
                      Relationship Group Type: none
                             Managing Vserver: svm3
                          SnapMirror Schedule: -
                       SnapMirror Policy Type: async-mirror
                            SnapMirror Policy: MirrorAllSnapshots
                                  Tries Limit: -
                            Throttle (KB/sec): unlimited
              Consistency Group Item Mappings: -
           Current Transfer Throttle (KB/sec): -
                                 Mirror State: Snapmirrored
                          Relationship Status: Idle
                      File Restore File Count: -
                       File Restore File List: -
                            Transfer Snapshot: -
                            Snapshot Progress: -
                               Total Progress: -
                    Network Compression Ratio: -
                          Snapshot Checkpoint: -
                              Newest Snapshot: snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                    Newest Snapshot Timestamp: 12/22 07:55:07
                            Exported Snapshot: snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                  Exported Snapshot Timestamp: 12/22 07:55:07
                                      Healthy: true
                              Relationship ID: 27f77b8a-a09e-11ee-981e-bdd56ead09c8
                          Source Vserver UUID: 04ee1778-a058-11ee-981e-bdd56ead09c8
                     Destination Vserver UUID: 674509fa-a065-11ee-981e-bdd56ead09c8
                         Current Operation ID: -
                                Transfer Type: -
                               Transfer Error: -
                           Last Transfer Type: update
                          Last Transfer Error: -
                    Last Transfer Error Codes: -
                           Last Transfer Size: 1.02GB
      Last Transfer Network Compression Ratio: 1:1
                       Last Transfer Duration: 0:2:5
                           Last Transfer From: svm:vol1
                  Last Transfer End Timestamp: 12/22 07:57:12
                             Unhealthy Reason: -
                        Progress Last Updated: -
                      Relationship Capability: 8.2 and above
                                     Lag Time: 0:2:21
                    Current Transfer Priority: -
                             SMTape Operation: -
                 Destination Volume Node Name: FsxId0ab6f9b00824a187c-01
                 Identity Preserve Vserver DR: -
                 Number of Successful Updates: 1
                     Number of Failed Updates: 1
                 Number of Successful Resyncs: 1
                     Number of Failed Resyncs: 0
                  Number of Successful Breaks: 0
                      Number of Failed Breaks: 0
                         Total Transfer Bytes: 1097840840
               Total Transfer Time in Seconds: 129
                Source Volume MSIDs Preserved: -
                                       OpMask: ffffffffffffffff
                       Is Auto Expand Enabled: -
          Percent Complete for Current Status: -

The transfer size was 1.02GB, showing that only the delta was transferred.
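Incidentally, if you only need the transfer size rather than the full instance output, you can query just the relevant fields. A sketch:

::*> snapmirror show -destination-path svm3:vol1_dst_dst -fields last-transfer-size, last-transfer-duration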

Let's check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress           last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ------------------ ------------ --------------- -------------- -----------------
svm     vol1   Enabled 79120 KB (6%) Done 1.70GB       5%              25.41MB        9.12GB
svm2    vol1_dst
               Enabled Idle for 00:28:45  12.21GB      0%              540KB          8.08GB
svm3    vol1_dst_dst
               Disabled
                       Idle for 00:25:26  83.62MB      0%              900KB          9.07GB
3 entries were displayed.

::*> volume efficiency inactive-data-compression show -volume vol1_dst_dst -instance

                                                                Volume: vol1_dst_dst
                                                               Vserver: svm3
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 0
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 1482
             Time since Last Inactive Data Compression Scan ended(sec): 1481
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 1481
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: false
                                                             Threshold: 0
                                                 Threshold Upper Limit: 0
                                                 Threshold Lower Limit: 0
                                            Client Read history window: 0
                                        Incompressible Data Percentage: 0%


::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 6.85GB    16GB            15.20GB 8.35GB 54%          791.6MB            8%                         791.6MB             9.12GB       60%                  -                 9.12GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.97GB             45%                        3.01GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm3    vol1_dst_dst
               6.41GB
                    1010MB    6.41GB          6.41GB  5.42GB 84%          5.11GB             49%                        3.21GB              10.53GB      164%                 -                 9.07GB              0B                                  0%
3 entries were displayed.

::*> volume show-footprint -volume vol1_dst_dst


      Vserver : svm3
      Volume  : vol1_dst_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            5.42GB       1%
             Footprint in Performance Tier             5.50GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        50.02MB       0%
      Delayed Frees                                   79.29MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  5.55GB       1%

      Effective Total Footprint                        5.55GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 116.0GB
                               Total Physical Used: 11.69GB
                    Total Storage Efficiency Ratio: 9.92:1
Total Data Reduction Logical Used Without Snapshots: 26.08GB
Total Data Reduction Physical Used Without Snapshots: 8.74GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.98:1
Total Data Reduction Logical Used without snapshots and flexclones: 26.08GB
Total Data Reduction Physical Used without snapshots and flexclones: 8.74GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.98:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 78.82GB
Total Physical Used in FabricPool Performance Tier: 7.67GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 10.28:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 17.41GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.73GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.68:1
                Logical Space Used for All Volumes: 26.08GB
               Physical Space Used for All Volumes: 15.22GB
               Space Saved by Volume Deduplication: 10.86GB
Space Saved by Volume Deduplication and pattern detection: 10.86GB
                Volume Deduplication Savings ratio: 1.71:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.71:1
               Logical Space Used by the Aggregate: 19.08GB
              Physical Space Used by the Aggregate: 11.69GB
           Space Saved by Aggregate Data Reduction: 7.39GB
                 Aggregate Data Reduction SE Ratio: 1.63:1
              Logical Size Used by Snapshot Copies: 89.94GB
             Physical Size Used by Snapshot Copies: 4.82GB
              Snapshot Volume Data Reduction Ratio: 18.66:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 18.66:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 1

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           312KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           168KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         79.82MB     1%    2%
svm3     vol1_dst_dst
                  test.2023-12-22_0533                    1.29GB    20%   25%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         101.9MB     2%    2%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         24.16MB     0%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                         41.57MB     1%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           160KB     0%    0%
13 entries were displayed.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume,using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst_dst
               Disabled
                       -      false       true               efficient               false         true            true                              true                            false
3 entries were displayed.

There was no significant change in the deduplication savings or the aggregate-layer data reduction.

Also, perhaps because the state was Disabled, Storage Efficiency does not appear to have run on vol1_dst_dst after the SnapMirror transfer. It may be better to enable Storage Efficiency after a SnapMirror initialize or resync; one option is sketched below.
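A minimal sketch for re-enabling Storage Efficiency and processing the blocks already on the volume, assuming the destination volume is writable (i.e. after snapmirror break); -scan-old-data true reprocesses existing data rather than only new writes:

::*> volume efficiency on -vserver svm3 -volume vol1_dst_dst
::*> volume efficiency start -vserver svm3 -volume vol1_dst_dst -scan-old-data true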

Deleting Snapshot copies on vol1_dst_dst

Cutting over SnapMirror between the svm and svm3 volumes

Let's delete Snapshot copies on vol1_dst_dst and confirm that the volume and aggregate usage decreases.

As preparation, cut over the SnapMirror between the svm and svm3 volumes.

::*> snapmirror quiesce -destination-path svm3:vol1_dst_dst
Operation succeeded: snapmirror quiesce for destination "svm3:vol1_dst_dst".

::*> snapmirror break -destination-path svm3:vol1_dst_dst
Operation succeeded: snapmirror break for destination "svm3:vol1_dst_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Broken-off
                                      Idle           -         true    -
2 entries were displayed.

Let's check the Storage Efficiency information.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume,using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst_dst
               Enabled auto   false       true               efficient               true          true            true                              true                            false
3 entries were displayed.

::*> volume efficiency inactive-data-compression show -volume vol1_dst_dst -instance

                                                                Volume: vol1_dst_dst
                                                               Vserver: svm3
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 0
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 1920
             Time since Last Inactive Data Compression Scan ended(sec): 1919
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 1919
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%

Storage Efficiency is now enabled. In addition, the policy has changed to auto and inline deduplication has been turned on.

Deleting Snapshot copies

Delete the Snapshot copies on vol1_dst_dst a little at a time.

Let's check the list of Snapshot copies.

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           312KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           168KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         79.82MB     1%    2%
svm3     vol1_dst_dst
                  test.2023-12-22_0533                    1.29GB    20%   25%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         101.9MB     2%    2%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         24.16MB     0%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                         41.57MB     1%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                          4.94MB     0%    0%
13 entries were displayed.
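Before deleting anything, you can estimate how much space removing a particular Snapshot copy would reclaim. A sketch, assuming the compute-reclaimable command is available on this ONTAP version (it runs at advanced privilege and can take a while, since ONTAP scans block ownership):

::*> snapshot compute-reclaimable -vserver svm3 -volume vol1_dst_dst -snapshots test.2023-12-22_0533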

Delete the oldest Snapshot copy, test.2023-12-22_0533.

::*> snapshot delete -vserver svm3 -volume vol1_dst_dst -snapshot test.2023-12-22_0533

Warning: Deleting a Snapshot copy permanently removes data that is stored only in that Snapshot copy. Are you sure you want to delete Snapshot copy "test.2023-12-22_0533" for volume "vol1_dst_dst" in Vserver "svm3" ? {y|n}: y

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           312KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           168KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         79.82MB     1%    2%
svm3     vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         101.9MB     2%    2%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         24.16MB     0%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                         41.57MB     1%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                          4.95MB     0%    0%
12 entries were displayed.

Check the volume and aggregate information.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 6.85GB    16GB            15.20GB 8.35GB 54%          791.6MB            8%                         791.6MB             9.12GB       60%                  -                 9.12GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.97GB             45%                        3.01GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm3    vol1_dst_dst
               4.90GB
                    795.2MB   4.90GB          4.90GB  4.12GB 84%          5.11GB             55%                        2.21GB              9.24GB       188%                 -                 9.07GB              0B                                  0%
3 entries were displayed.

::*> volume show-footprint -volume vol1_dst_dst


      Vserver : svm3
      Volume  : vol1_dst_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            4.12GB       0%
             Footprint in Performance Tier             5.07GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                             0B       0%
      Delayed Frees                                   969.2MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  5.07GB       1%

      Effective Total Footprint                        5.07GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 109.9GB
                               Total Physical Used: 11.83GB
                    Total Storage Efficiency Ratio: 9.29:1
Total Data Reduction Logical Used Without Snapshots: 26.08GB
Total Data Reduction Physical Used Without Snapshots: 9.58GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.72:1
Total Data Reduction Logical Used without snapshots and flexclones: 26.08GB
Total Data Reduction Physical Used without snapshots and flexclones: 9.58GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.72:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 72.70GB
Total Physical Used in FabricPool Performance Tier: 7.80GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 9.32:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 17.40GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 5.57GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.13:1
                Logical Space Used for All Volumes: 26.08GB
               Physical Space Used for All Volumes: 15.22GB
               Space Saved by Volume Deduplication: 10.86GB
Space Saved by Volume Deduplication and pattern detection: 10.86GB
                Volume Deduplication Savings ratio: 1.71:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.71:1
               Logical Space Used by the Aggregate: 18.56GB
              Physical Space Used by the Aggregate: 11.83GB
           Space Saved by Aggregate Data Reduction: 6.73GB
                 Aggregate Data Reduction SE Ratio: 1.57:1
              Logical Size Used by Snapshot Copies: 83.82GB
             Physical Size Used by Snapshot Copies: 3.53GB
              Snapshot Volume Data Reduction Ratio: 23.76:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 23.76:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

logical-used has dropped from 10.53GB to 9.24GB.

Physical Size Used by Snapshot Copies has also gone from 4.82GB down to 3.53GB, so the physical space consumed by Snapshots has been reduced.

Checking vol1_dst_dst usage from an NFS client

Let's mount vol1_dst_dst over NFS and see what sizes get reported.

First, mount vol1_dst_dst at the junction path /vol1_dst_dst.

::*> volume mount -vserver svm3 -volume vol1_dst_dst -junction-path /vol1_dst_dst
Queued private job: 32

Then mount the volume from an NFS client.


$ sudo mount -t nfs svm-00c880c6c6eb922ed.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1_dst_dst /mnt/fsxn/vol1_dst_dst/

$ ls -lh /mnt/fsxn/vol1_dst_dst/
total 9.1G
-rw-r--r--. 1 root root 1.0G Dec 22 01:46 1_padding_file
-rw-r--r--. 1 root root 1.0G Dec 22 07:53 ABCDE_padding_file
-rw-r--r--. 1 root root 1.0G Dec 22 05:28 a_padding_file
-rw-r--r--. 1 root root 1.0G Dec 22 06:55 abcde_padding_file
-rw-r--r--. 1 root root 1.0G Dec 22 01:47 urandom_block_file
-rw-r--r--. 1 root root 1.0G Dec 22 05:02 urandom_block_file2
-rw-r--r--. 1 root root 1.0G Dec 22 05:02 urandom_block_file2_copy
-rw-r--r--. 1 root root 1.0G Dec 22 01:47 urandom_block_file_copy
-rw-r--r--. 1 root root 1.0G Dec 22 06:41 urandom_block_file_copy2

$ df -hT -t nfs4
Filesystem                                                                           Type  Size  Used Avail Use% Mounted on
svm-0058ae83d258ab2e3.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1         nfs4   16G  8.4G  6.9G  55% /mnt/fsxn/vol1
svm-00c880c6c6eb922ed.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1_dst_dst nfs4  5.0G  4.2G  796M  85% /mnt/fsxn/vol1_dst_dst

The usage is reported as 4.2G. This shows that what gets reported is the physical usage, i.e. after Storage Efficiency savings.
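As an aside, if you would rather have NFS clients see logical usage (before efficiency savings), ONTAP volumes have a logical space reporting option. A minimal sketch, assuming the is-space-reporting-logical option available on recent ONTAP versions; I did not change it in this test:

::*> volume modify -vserver svm3 -volume vol1_dst_dst -is-space-reporting-logical true

With this set, df on the client should report logical sizes instead.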

Deleting more Snapshots

Let's delete another Snapshot. The target is snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528, whose size is about 101MB.

::*> snapshot delete -vserver svm3 -volume vol1_dst_dst -snapshot snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
Warning: Deleting a Snapshot copy permanently removes data that is stored only in that Snapshot copy. Are you sure you want to delete Snapshot copy "snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528" for volume "vol1_dst_dst"
         in Vserver "svm3" ? {y|n}: y

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           312KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           288KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         79.82MB     1%    2%
svm3     vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         24.16MB     0%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                         41.57MB     1%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                          4.96MB     0%    0%
11 entries were displayed.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 6.85GB    16GB            15.20GB 8.35GB 54%          808.8MB            9%                         808.8MB             9.14GB       60%                  -                 9.14GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.97GB             45%                        3.01GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm3    vol1_dst_dst
               4.90GB
                    896.8MB   4.90GB          4.90GB  4.03GB 82%          5.11GB             56%                        2.21GB              9.14GB       186%                 -                 9.07GB              0B                                  0%
3 entries were displayed.

::*> volume show-footprint -volume vol1_dst_dst


      Vserver : svm3
      Volume  : vol1_dst_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            4.03GB       0%
             Footprint in Performance Tier             4.17GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        50.02MB       0%
      Delayed Frees                                   145.3MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  4.22GB       0%

      Effective Total Footprint                        4.22GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 103.9GB
                               Total Physical Used: 11.04GB
                    Total Storage Efficiency Ratio: 9.41:1
Total Data Reduction Logical Used Without Snapshots: 26.09GB
Total Data Reduction Physical Used Without Snapshots: 8.78GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.97:1
Total Data Reduction Logical Used without snapshots and flexclones: 26.09GB
Total Data Reduction Physical Used without snapshots and flexclones: 8.78GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.97:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 66.66GB
Total Physical Used in FabricPool Performance Tier: 7.01GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 9.51:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 17.41GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.77GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.65:1
                Logical Space Used for All Volumes: 26.09GB
               Physical Space Used for All Volumes: 15.22GB
               Space Saved by Volume Deduplication: 10.87GB
Space Saved by Volume Deduplication and pattern detection: 10.87GB
                Volume Deduplication Savings ratio: 1.71:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.71:1
               Logical Space Used by the Aggregate: 16.74GB
              Physical Space Used by the Aggregate: 11.04GB
           Space Saved by Aggregate Data Reduction: 5.70GB
                 Aggregate Data Reduction SE Ratio: 1.52:1
              Logical Size Used by Snapshot Copies: 77.78GB
             Physical Size Used by Snapshot Copies: 3.43GB
              Snapshot Volume Data Reduction Ratio: 22.68:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 22.68:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

logical-used has dropped from 9.24GB to 9.14GB.

Check the usage from the NFS client as well.

$ df -hT -t nfs4
Filesystem                                                                           Type  Size  Used Avail Use% Mounted on
svm-0058ae83d258ab2e3.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1         nfs4   16G  8.4G  6.9G  55% /mnt/fsxn/vol1
svm-00c880c6c6eb922ed.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1_dst_dst nfs4  5.0G  4.1G  897M  83% /mnt/fsxn/vol1_dst_dst

It has gone down by roughly 100MB.
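Incidentally, rather than waiting for blocks to age into cold data, it should also be possible to kick off an inactive data compression scan manually. A sketch, assuming the documented -inactive-days option (passing 0 treats all data as cold); in this test I simply let the scheduled scan do the work:

::*> volume efficiency inactive-data-compression start -vserver svm3 -volume vol1_dst_dst -inactive-days 0

Progress can then be followed with volume efficiency inactive-data-compression show -instance, as used throughout this article.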

I then left everything untouched for just under three days. Let's check the volume, aggregate, and Snapshot information in that state.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 6.85GB    16GB            15.20GB 8.35GB 54%          808.8MB            9%                         808.8MB             9.14GB       60%                  -                 9.14GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.97GB             45%                        3.01GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm3    vol1_dst_dst
               4.90GB
                    896.4MB   4.90GB          4.90GB  4.03GB 82%          5.11GB             56%                        2.21GB              9.14GB       186%                 -                 9.07GB              0B                                  0%
3 entries were displayed.

::*> volume show-footprint -volume vol1_dst_dst


      Vserver : svm3
      Volume  : vol1_dst_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            4.03GB       0%
             Footprint in Performance Tier             4.03GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        50.02MB       0%
      Delayed Frees                                    1.34MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  4.08GB       0%

      Footprint Data Reduction                        641.0MB       0%
           Auto Adaptive Compression                  641.0MB       0%
      Effective Total Footprint                        3.45GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 103.8GB
                               Total Physical Used: 11.52GB
                    Total Storage Efficiency Ratio: 9.02:1
Total Data Reduction Logical Used Without Snapshots: 25.94GB
Total Data Reduction Physical Used Without Snapshots: 9.19GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.82:1
Total Data Reduction Logical Used without snapshots and flexclones: 25.94GB
Total Data Reduction Physical Used without snapshots and flexclones: 9.19GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.82:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 66.80GB
Total Physical Used in FabricPool Performance Tier: 7.67GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 8.71:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 17.42GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 5.35GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.25:1
                Logical Space Used for All Volumes: 25.94GB
               Physical Space Used for All Volumes: 15.07GB
               Space Saved by Volume Deduplication: 10.87GB
Space Saved by Volume Deduplication and pattern detection: 10.87GB
                Volume Deduplication Savings ratio: 1.72:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.72:1
               Logical Space Used by the Aggregate: 16.97GB
              Physical Space Used by the Aggregate: 11.52GB
           Space Saved by Aggregate Data Reduction: 5.45GB
                 Aggregate Data Reduction SE Ratio: 1.47:1
              Logical Size Used by Snapshot Copies: 77.89GB
             Physical Size Used by Snapshot Copies: 3.43GB
              Snapshot Volume Data Reduction Ratio: 22.71:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 22.71:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           312KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           292KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         79.82MB     1%    2%
svm3     vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         24.16MB     0%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                         41.57MB     1%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                          4.96MB     0%    0%
11 entries were displayed.

::*> volume efficiency inactive-data-compression show -volume vol1_dst_dst -instance

                                                                Volume: vol1_dst_dst
                                                               Vserver: svm3
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 0
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 760
           Time since Last Inactive Data Compression Scan started(sec): 0
             Time since Last Inactive Data Compression Scan ended(sec): 12881
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 0
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%

In the volume show-footprint output, Auto Adaptive Compression now shows 641.0MB of savings. On the other hand, Space Saved by Aggregate Data Reduction actually went down, from 5.70GB to 5.45GB.

Working out when cascaded SnapMirror preserves Storage Efficiency

SnapMirror initialize between the svm and svm2 volumes

Let's work out the conditions under which cascaded SnapMirror can preserve Storage Efficiency savings.

In the test above, after cascading SnapMirror twice (vol1 -> vol1_dst -> vol1_dst_dst), the Storage Efficiency savings achieved on vol1_dst were preserved on vol1_dst_dst.

That made me wonder how it behaves when only the vol1_dst -> vol1_dst_dst leg is transferred.

First, initialize a SnapMirror relationship between volumes on svm and svm2.

The source volume is vol1, and the destination is a new volume, vol1_dst2 (snapmirror protect creates the destination volume automatically, using the specified suffix).

::*> snapmirror protect -path-list svm:vol1 -destination-vserver svm2 -policy MirrorAllSnapshots -auto-initialize true -support-tiering true -tiering-policy none -destination-volume-suffix _dst2
[Job 106] Job is queued: snapmirror protect for list of source endpoints beginning with "svm:vol1".

::*> volume show -vserver svm2
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm2      svm2_root    aggr1        online     RW          1GB    967.3MB    0%
svm2      vol1_dst     aggr1        online     DP       7.54GB     1.10GB   84%
svm2      vol1_dst2    aggr1        online     DP       9.21GB     7.88GB    9%
3 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Uninitialized
                                      Transferring   1.08GB    true    12/25 05:04:32
                 svm3:vol1_dst_dst
                              Broken-off
                                      Idle           -         true    -
3 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Snapmirrored
                                      Finalizing     3.07GB    true    12/25 05:07:45
                 svm3:vol1_dst_dst
                              Broken-off
                                      Idle           -         true    -
3 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Broken-off
                                      Idle           -         true    -
3 entries were displayed.

::*> snapmirror show -destination-path svm2:vol1_dst2

                                  Source Path: svm:vol1
                               Source Cluster: -
                               Source Vserver: svm
                                Source Volume: vol1
                             Destination Path: svm2:vol1_dst2
                          Destination Cluster: -
                          Destination Vserver: svm2
                           Destination Volume: vol1_dst2
                            Relationship Type: XDP
                      Relationship Group Type: none
                             Managing Vserver: svm2
                          SnapMirror Schedule: -
                       SnapMirror Policy Type: async-mirror
                            SnapMirror Policy: MirrorAllSnapshots
                                  Tries Limit: -
                            Throttle (KB/sec): unlimited
              Consistency Group Item Mappings: -
           Current Transfer Throttle (KB/sec): -
                                 Mirror State: Snapmirrored
                          Relationship Status: Idle
                      File Restore File Count: -
                       File Restore File List: -
                            Transfer Snapshot: -
                            Snapshot Progress: -
                               Total Progress: -
                    Network Compression Ratio: -
                          Snapshot Checkpoint: -
                              Newest Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                    Newest Snapshot Timestamp: 12/25 05:04:06
                            Exported Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                  Exported Snapshot Timestamp: 12/25 05:04:06
                                      Healthy: true
                              Relationship ID: 078ecbbc-a2e3-11ee-981e-bdd56ead09c8
                          Source Vserver UUID: 04ee1778-a058-11ee-981e-bdd56ead09c8
                     Destination Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8
                         Current Operation ID: -
                                Transfer Type: -
                               Transfer Error: -
                           Last Transfer Type: update
                          Last Transfer Error: -
                    Last Transfer Error Codes: -
                           Last Transfer Size: 3.07GB
      Last Transfer Network Compression Ratio: 1:1
                       Last Transfer Duration: 0:0:50
                           Last Transfer From: svm:vol1
                  Last Transfer End Timestamp: 12/25 05:07:47
                             Unhealthy Reason: -
                        Progress Last Updated: -
                      Relationship Capability: 8.2 and above
                                     Lag Time: 0:4:19
                    Current Transfer Priority: -
                             SMTape Operation: -
                 Destination Volume Node Name: FsxId0ab6f9b00824a187c-01
                 Identity Preserve Vserver DR: -
                 Number of Successful Updates: 1
                     Number of Failed Updates: 0
                 Number of Successful Resyncs: 0
                     Number of Failed Resyncs: 0
                  Number of Successful Breaks: 0
                      Number of Failed Breaks: 0
                         Total Transfer Bytes: 9057885176
               Total Transfer Time in Seconds: 221
                Source Volume MSIDs Preserved: -
                                       OpMask: ffffffffffffffff
                       Is Auto Expand Enabled: -
          Percent Complete for Current Status: -

3.07GB was transferred.

Check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst2
               Disabled
                       -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst_dst
               Enabled auto   false       true               efficient               true          true            true                              true                            false
4 entries were displayed.

::*> volume efficiency inactive-data-compression show -volume vol1_dst2 -instance

                                                                Volume: vol1_dst2
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 0
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 0
             Time since Last Inactive Data Compression Scan ended(sec): 255
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 0
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 6.85GB    16GB            15.20GB 8.35GB 54%          808.8MB            9%                         808.8MB             9.14GB       60%                  -                 9.14GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.97GB             45%                        3.01GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm2    vol1_dst2
               10.42GB
                    1.58GB    10.42GB         9.90GB  8.32GB 84%          774.5MB            8%                         8.24GB              9.04GB       91%                  -                 9.04GB              0B                                  0%
svm3    vol1_dst_dst
               4.90GB
                    896.4MB   4.90GB          4.90GB  4.03GB 82%          5.11GB             56%                        2.21GB              9.14GB       186%                 -                 9.07GB              0B                                  0%
4 entries were displayed.

::*> volume show-footprint -volume vol1_dst2


      Vserver : svm2
      Volume  : vol1_dst2

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            8.32GB       1%
             Footprint in Performance Tier             8.41GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        64.09MB       0%
      Delayed Frees                                   96.16MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  8.48GB       1%

      Effective Total Footprint                        8.48GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 160.2GB
                               Total Physical Used: 19.02GB
                    Total Storage Efficiency Ratio: 8.43:1
Total Data Reduction Logical Used Without Snapshots: 35.01GB
Total Data Reduction Physical Used Without Snapshots: 16.35GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.14:1
Total Data Reduction Logical Used without snapshots and flexclones: 35.01GB
Total Data Reduction Physical Used without snapshots and flexclones: 16.35GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.14:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 114.3GB
Total Physical Used in FabricPool Performance Tier: 15.20GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 7.52:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 26.50GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 12.55GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.11:1
                Logical Space Used for All Volumes: 35.01GB
               Physical Space Used for All Volumes: 23.38GB
               Space Saved by Volume Deduplication: 11.63GB
Space Saved by Volume Deduplication and pattern detection: 11.63GB
                Volume Deduplication Savings ratio: 1.50:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.50:1
               Logical Space Used by the Aggregate: 24.47GB
              Physical Space Used by the Aggregate: 19.02GB
           Space Saved by Aggregate Data Reduction: 5.45GB
                 Aggregate Data Reduction SE Ratio: 1.29:1
              Logical Size Used by Snapshot Copies: 125.2GB
             Physical Size Used by Snapshot Copies: 3.43GB
              Snapshot Volume Data Reduction Ratio: 36.48:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 36.48:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 1

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           312KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           292KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                           144KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         79.82MB     1%    2%
         vol1_dst2
                  test.2023-12-22_0533                     608KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           364KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           420KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           280KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                           152KB     0%    0%
svm3     vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         24.16MB     0%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                         41.57MB     1%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                          4.96MB     0%    0%
17 entries were displayed.

The destination volume shows about 774.5MB of deduplication savings, roughly in line with the 808.8MB of savings on the source vol1, so the source's deduplication appears to have been carried over by the transfer.

Space Saved by Aggregate Data Reduction is also unchanged, at 5.45GB.
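For this kind of before/after comparison, narrowing volume show down to just the dedupe fields keeps the output readable. A minimal sketch using the same fields as above:

::*> volume show -volume vol1,vol1_dst2 -fields dedupe-space-saved, dedupe-space-saved-percent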

Running Storage Efficiency on vol1_dst2

Run Storage Efficiency on vol1_dst2.

::*> volume efficiency on -vserver svm2 -volume vol1_dst2
Efficiency for volume "vol1_dst2" of Vserver "svm2" is enabled.

::*> volume efficiency start -vserver svm2 -volume vol1_dst2 -scan-old-data

Warning: This operation scans all of the data in volume "vol1_dst2" of Vserver "svm2". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol1_dst2" of Vserver "svm2" has started.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 69:08:19 1.72GB       5%              8.20MB         9.14GB
svm2    vol1_dst
               Enabled Idle for 00:01:28 12.19GB      0%              120KB          8.08GB
svm2    vol1_dst2
               Enabled 1757184 KB Scanned
                                         0B           0%              0B             9.08GB
svm3    vol1_dst_dst
               Enabled Idle for 69:43:32 83.62MB      1%              900KB          9.07GB
4 entries were displayed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 69:09:10 1.72GB       5%              8.20MB         9.14GB
svm2    vol1_dst
               Enabled Idle for 00:02:19 12.19GB      0%              120KB          8.08GB
svm2    vol1_dst2
               Enabled 15507456 KB Scanned
                                         0B           0%              0B             9.08GB
svm3    vol1_dst_dst
               Enabled Idle for 69:44:23 83.62MB      1%              900KB          9.07GB
4 entries were displayed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 69:12:07 1.72GB       5%              8.20MB         9.14GB
svm2    vol1_dst
               Enabled Idle for 00:05:16 12.19GB      0%              120KB          8.08GB
svm2    vol1_dst2
               Enabled 14701036 KB (91%) Done
                                         0B           0%              3.69MB         9.08GB
svm3    vol1_dst_dst
               Enabled Idle for 69:47:20 83.62MB      1%              900KB          9.07GB
4 entries were displayed.

::*> volume efficiency show -volume vol1* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state   progress          last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 69:12:51 1.72GB       5%              8.20MB         9.14GB
svm2    vol1_dst
               Enabled Idle for 00:06:00 12.19GB      0%              120KB          8.08GB
svm2    vol1_dst2
               Enabled Idle for 00:00:16 17.24GB      0%              3.69MB         9.09GB
svm3    vol1_dst_dst
               Enabled Idle for 69:48:04 83.62MB      1%              900KB          9.07GB
4 entries were displayed.

It looks like 17.24GB of data was processed.

Check the volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst2
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst_dst
               Enabled auto   false       true               efficient               true          true            true                              true                            false
4 entries were displayed.

::*> volume efficiency inactive-data-compression show -volume vol1_dst2 -instance

                                                                Volume: vol1_dst2
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 0
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 0
             Time since Last Inactive Data Compression Scan ended(sec): 929
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 0
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 6.85GB    16GB            15.20GB 8.35GB 54%          808.8MB            9%                         808.8MB             9.14GB       60%                  -                 9.14GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.98GB             45%                        3.00GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm2    vol1_dst2
               9.84GB
                    1.45GB    9.84GB          9.35GB  7.90GB 84%          4.53GB             36%                        4.10GB              12.40GB      133%                 -                 9.06GB              0B                                  0%
svm3    vol1_dst_dst
               4.90GB
                    896.4MB   4.90GB          4.90GB  4.03GB 82%          5.11GB             56%                        2.21GB              9.14GB       186%                 -                 9.07GB              0B                                  0%
4 entries were displayed.

::*> volume show-footprint -volume vol1_dst2


      Vserver : svm2
      Volume  : vol1_dst2

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            8.39GB       1%
             Footprint in Performance Tier             8.46GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        64.09MB       0%
      Deduplication Metadata                          12.04MB       0%
           Deduplication                              12.04MB       0%
      Delayed Frees                                   69.02MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  8.53GB       1%

      Effective Total Footprint                        8.53GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 160.2GB
                               Total Physical Used: 19.20GB
                    Total Storage Efficiency Ratio: 8.35:1
Total Data Reduction Logical Used Without Snapshots: 35.02GB
Total Data Reduction Physical Used Without Snapshots: 13.54GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.59:1
Total Data Reduction Logical Used without snapshots and flexclones: 35.02GB
Total Data Reduction Physical Used without snapshots and flexclones: 13.54GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.59:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 114.3GB
Total Physical Used in FabricPool Performance Tier: 15.41GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 7.42:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 26.51GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 9.76GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.72:1
                Logical Space Used for All Volumes: 35.02GB
               Physical Space Used for All Volumes: 19.61GB
               Space Saved by Volume Deduplication: 15.41GB
Space Saved by Volume Deduplication and pattern detection: 15.41GB
                Volume Deduplication Savings ratio: 1.79:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.79:1
               Logical Space Used by the Aggregate: 24.65GB
              Physical Space Used by the Aggregate: 19.20GB
           Space Saved by Aggregate Data Reduction: 5.45GB
                 Aggregate Data Reduction SE Ratio: 1.28:1
              Logical Size Used by Snapshot Copies: 125.2GB
             Physical Size Used by Snapshot Copies: 7.27GB
              Snapshot Volume Data Reduction Ratio: 17.23:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 17.23:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           312KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           292KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                           144KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         82.71MB     1%    3%
         vol1_dst2
                  test.2023-12-22_0533                     608KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           364KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           420KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           280KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                          3.83GB    39%   46%
svm3     vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         24.16MB     0%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                         41.57MB     1%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                          4.96MB     0%    0%
17 entries were displayed.

It looks like 4.53GB was deduplicated (dedupe-space-saved on vol1_dst2).

Running Inactive data compression on vol1_dst2

Run Inactive data compression on vol1_dst2.

::*> volume efficiency inactive-data-compression start -vserver svm2 -volume vol1_dst2 -inactive-days 0
Inactive data compression scan started on volume "vol1_dst2" in Vserver "svm2"
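
The scan runs asynchronously, so I checked its progress a few times until it returned to IDLE. If you want to script that wait, here is a minimal polling sketch, assuming ssh access to the FSxN management endpoint as fsxadmin (the hostname is a placeholder, and the command may need advanced privilege, as the ::*> prompt suggests):

# Poll until the Inactive data compression scan on vol1_dst2 finishes.
# The management endpoint hostname below is a placeholder.
FSX_MGMT="management.fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com"
while ssh "fsxadmin@${FSX_MGMT}" \
      "volume efficiency inactive-data-compression show -volume vol1_dst2 -instance" \
      | grep -q "Progress: RUNNING"
do
    sleep 30
done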

::*> volume efficiency inactive-data-compression show -volume vol1_dst2 -instance

                                                                Volume: vol1_dst2
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 0%
                                                  Phase1 L1s Processed: 498
                                                    Phase1 Lns Skipped:
                                                                        L1:     0
                                                                        L2:     0
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 0
                                               Phase2 Blocks Processed: 0
                                     Number of Cold Blocks Encountered: 100576
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 3592
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 0
             Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 0
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume efficiency inactive-data-compression show -volume vol1_dst2 -instance

                                                                Volume: vol1_dst2
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: default
                                                              Progress: RUNNING
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: 83%
                                                  Phase1 L1s Processed: 9252
                                                    Phase1 Lns Skipped:
                                                                        L1:     0
                                                                        L2:     0
                                                                        L3:     0
                                                                        L4:     0
                                                                        L5:     0
                                                                        L6:     0
                                                                        L7:     0
                                                   Phase2 Total Blocks: 3281776
                                               Phase2 Blocks Processed: 2735104
                                     Number of Cold Blocks Encountered: 1275784
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 615760
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 605
             Time since Last Inactive Data Compression Scan ended(sec): 515
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 515
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume efficiency inactive-data-compression show -volume vol1_dst2 -instance

                                                                Volume: vol1_dst2
                                                               Vserver: svm2
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 2297288
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 1002592
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 550
             Time since Last Inactive Data Compression Scan ended(sec): 27
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 27
                           Average time for Cold Data Compression(sec): 523
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 50%

Let's check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst2
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst_dst
               Enabled auto   false       true               efficient               true          true            true                              true                            false
4 entries were displayed.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 6.85GB    16GB            15.20GB 8.35GB 54%          808.8MB            9%                         808.8MB             9.14GB       60%                  -                 9.14GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.98GB             45%                        3.00GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm2    vol1_dst2
               9.84GB
                    1.45GB    9.84GB          9.35GB  7.90GB 84%          4.53GB             36%                        4.10GB              12.40GB      133%                 -                 9.06GB              0B                                  0%
svm3    vol1_dst_dst
               4.90GB
                    896.4MB   4.90GB          4.90GB  4.03GB 82%          5.11GB             56%                        2.21GB              9.14GB       186%                 -                 9.07GB              0B                                  0%
4 entries were displayed.

::*> volume show-footprint -volume vol1_dst2


      Vserver : svm2
      Volume  : vol1_dst2

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            8.39GB       1%
             Footprint in Performance Tier             8.47GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        64.09MB       0%
      Deduplication Metadata                          12.04MB       0%
           Deduplication                              12.04MB       0%
      Delayed Frees                                   80.61MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  8.54GB       1%

      Footprint Data Reduction                         3.87GB       0%
           Auto Adaptive Compression                   3.87GB       0%
      Effective Total Footprint                        4.67GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 160.1GB
                               Total Physical Used: 19.06GB
                    Total Storage Efficiency Ratio: 8.40:1
Total Data Reduction Logical Used Without Snapshots: 34.90GB
Total Data Reduction Physical Used Without Snapshots: 13.92GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.51:1
Total Data Reduction Logical Used without snapshots and flexclones: 34.90GB
Total Data Reduction Physical Used without snapshots and flexclones: 13.92GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.51:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 114.3GB
Total Physical Used in FabricPool Performance Tier: 15.38GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 7.44:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 26.51GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 10.26GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.59:1
                Logical Space Used for All Volumes: 34.90GB
               Physical Space Used for All Volumes: 19.49GB
               Space Saved by Volume Deduplication: 15.41GB
Space Saved by Volume Deduplication and pattern detection: 15.41GB
                Volume Deduplication Savings ratio: 1.79:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.79:1
               Logical Space Used by the Aggregate: 26.96GB
              Physical Space Used by the Aggregate: 19.06GB
           Space Saved by Aggregate Data Reduction: 7.90GB
                 Aggregate Data Reduction SE Ratio: 1.41:1
              Logical Size Used by Snapshot Copies: 125.2GB
             Physical Size Used by Snapshot Copies: 7.27GB
              Snapshot Volume Data Reduction Ratio: 17.23:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 17.23:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           312KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           292KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                           144KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         82.71MB     1%    3%
         vol1_dst2
                  test.2023-12-22_0533                     608KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           364KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           420KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           280KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                          3.83GB    39%   46%
svm3     vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         24.16MB     0%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                         41.57MB     1%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                          4.96MB     0%    0%
17 entries were displayed.

Space Saved by Aggregate Data Reduction increased from 5.45GB to 7.90GB, a gain of about 2.45GB.

Also, Auto Adaptive Compression is now reported in the volume footprint, at 3.87GB.

Creating a volume on svm3

Create the volume vol1_dst2_dst on svm3.

::*> volume create -vserver svm3 -volume vol1_dst2_dst -aggregate aggr1 -state online -type DP -size 4GB -tiering-policy none
[Job 113] Job succeeded: Successful
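
If the destination were a separate FSxN file system, the same DP volume could also be created through the AWS API rather than the ONTAP CLI. A minimal sketch with the AWS CLI (the SVM ID is a placeholder):

# Create a 4GB DP-type volume with Tiering Policy NONE
# (svm-0123456789abcdef0 is a placeholder for the destination SVM ID)
aws fsx create-volume \
    --volume-type ONTAP \
    --name vol1_dst2_dst \
    --ontap-configuration 'StorageVirtualMachineId=svm-0123456789abcdef0,SizeInMegabytes=4096,OntapVolumeType=DP,TieringPolicy={Name=NONE}'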

::*> volume show -volume vol1* -fields type, autosize-mode, max-autosize
vserver volume max-autosize autosize-mode type
------- ------ ------------ ------------- ----
svm     vol1   19.20GB      off           RW
svm2    vol1_dst
               100TB        grow_shrink   DP
svm2    vol1_dst2
               100TB        grow_shrink   DP
svm3    vol1_dst2_dst
               100TB        grow_shrink   DP
svm3    vol1_dst_dst
               100TB        grow_shrink   RW
5 entries were displayed.

SnapMirror initialize between the svm2 and svm3 volumes

Initialize the SnapMirror relationship between the volumes on svm2 and svm3.

First, create the SnapMirror relationship.

::*> snapmirror create -source-path svm2:vol1_dst2 -destination-vserver svm3 -destination-volume vol1_dst2_dst -policy MirrorAllSnapshots
Operation succeeded: snapmirror create for the relationship with destination "svm3:vol1_dst2_dst".
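
As an aside, the same relationship can also be created through the ONTAP REST API. A rough curl sketch (endpoint and credentials are placeholders; I have not run this exact call in this environment):

# Create the SnapMirror relationship via the ONTAP REST API
# (hostname and password are placeholders)
curl -sku "fsxadmin:${FSXADMIN_PASSWORD}" -X POST \
    "https://management.fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com/api/snapmirror/relationships" \
    -H "Content-Type: application/json" \
    -d '{
          "source":      { "path": "svm2:vol1_dst2" },
          "destination": { "path": "svm3:vol1_dst2_dst" },
          "policy":      { "name": "MirrorAllSnapshots" }
        }'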

Next, run the initialize.

::*> snapmirror initialize -destination-path svm3:vol1_dst2_dst -source-path svm2:vol1_dst2
Operation is queued: snapmirror initialize of destination "svm3:vol1_dst2_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Broken-off
                                      Idle           -         true    -
svm2:vol1_dst2
            XDP  svm3:vol1_dst2_dst
                              Uninitialized
                                      Transferring   0B        true    12/25 05:45:42
4 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Broken-off
                                      Idle           -         true    -
svm2:vol1_dst2
            XDP  svm3:vol1_dst2_dst
                              Uninitialized
                                      Transferring   1.33GB    true    12/25 05:45:50
4 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Broken-off
                                      Idle           -         true    -
svm2:vol1_dst2
            XDP  svm3:vol1_dst2_dst
                              Snapmirrored
                                      Transferring   1.31GB    true    12/25 05:46:41
4 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Broken-off
                                      Idle           -         true    -
svm2:vol1_dst2
            XDP  svm3:vol1_dst2_dst
                              Snapmirrored
                                      Idle           -         true    -
4 entries were displayed.

::*> snapmirror show -destination-path svm3:vol1_dst2_dst

                                  Source Path: svm2:vol1_dst2
                               Source Cluster: -
                               Source Vserver: svm2
                                Source Volume: vol1_dst2
                             Destination Path: svm3:vol1_dst2_dst
                          Destination Cluster: -
                          Destination Vserver: svm3
                           Destination Volume: vol1_dst2_dst
                            Relationship Type: XDP
                      Relationship Group Type: none
                             Managing Vserver: svm3
                          SnapMirror Schedule: -
                       SnapMirror Policy Type: async-mirror
                            SnapMirror Policy: MirrorAllSnapshots
                                  Tries Limit: -
                            Throttle (KB/sec): unlimited
              Consistency Group Item Mappings: -
           Current Transfer Throttle (KB/sec): -
                                 Mirror State: Snapmirrored
                          Relationship Status: Idle
                      File Restore File Count: -
                       File Restore File List: -
                            Transfer Snapshot: -
                            Snapshot Progress: -
                               Total Progress: -
                    Network Compression Ratio: -
                          Snapshot Checkpoint: -
                              Newest Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                    Newest Snapshot Timestamp: 12/25 05:04:06
                            Exported Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                  Exported Snapshot Timestamp: 12/25 05:04:06
                                      Healthy: true
                              Relationship ID: cb1dc5b5-a2e8-11ee-981e-bdd56ead09c8
                          Source Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8
                     Destination Vserver UUID: 674509fa-a065-11ee-981e-bdd56ead09c8
                         Current Operation ID: -
                                Transfer Type: -
                               Transfer Error: -
                           Last Transfer Type: update
                          Last Transfer Error: -
                    Last Transfer Error Codes: -
                           Last Transfer Size: 1.59GB
      Last Transfer Network Compression Ratio: 1:1
                       Last Transfer Duration: 0:0:38
                           Last Transfer From: svm2:vol1_dst2
                  Last Transfer End Timestamp: 12/25 05:46:58
                             Unhealthy Reason: -
                        Progress Last Updated: -
                      Relationship Capability: 8.2 and above
                                     Lag Time: 0:43:37
                    Current Transfer Priority: -
                             SMTape Operation: -
                 Destination Volume Node Name: FsxId0ab6f9b00824a187c-01
                 Identity Preserve Vserver DR: -
                 Number of Successful Updates: 1
                     Number of Failed Updates: 0
                 Number of Successful Resyncs: 0
                     Number of Failed Resyncs: 0
                  Number of Successful Breaks: 0
                      Number of Failed Breaks: 0
                         Total Transfer Bytes: 5891022441
               Total Transfer Time in Seconds: 76
                Source Volume MSIDs Preserved: -
                                       OpMask: ffffffffffffffff
                       Is Auto Expand Enabled: -
          Percent Complete for Current Status: -

Check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst2
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst2_dst
               Disabled
                       -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst_dst
               Enabled auto   false       true               efficient               true          true            true                              true                            false
5 entries were displayed.

::*> volume efficiency inactive-data-compression show -volume vol1_dst2_dst -instance

                                                                Volume: vol1_dst2_dst
                                                               Vserver: svm3
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 0
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 0
             Time since Last Inactive Data Compression Scan ended(sec): 134
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 0
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 6.85GB    16GB            15.20GB 8.35GB 54%          808.8MB            9%                         808.8MB             9.14GB       60%                  -                 9.14GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.98GB             45%                        3.00GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm2    vol1_dst2
               9.84GB
                    1.45GB    9.84GB          9.35GB  7.90GB 84%          4.53GB             36%                        4.10GB              12.40GB      133%                 -                 9.06GB              0B                                  0%
svm3    vol1_dst2_dst
               9.88GB
                    1.55GB    9.88GB          9.88GB  8.32GB 84%          774.5MB            8%                         8.24GB              9.04GB       92%                  -                 9.04GB              0B                                  0%
svm3    vol1_dst_dst
               4.90GB
                    896.4MB   4.90GB          4.90GB  4.03GB 82%          5.11GB             56%                        2.21GB              9.14GB       186%                 -                 9.07GB              0B                                  0%
5 entries were displayed.

::*> volume show-footprint -volume vol1_dst2_dst


      Vserver : svm3
      Volume  : vol1_dst2_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            8.32GB       1%
             Footprint in Performance Tier             8.42GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        61.31MB       0%
      Delayed Frees                                   101.0MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  8.48GB       1%

      Footprint Data Reduction                         3.71GB       0%
           Auto Adaptive Compression                   3.71GB       0%
      Effective Total Footprint                        4.77GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 207.4GB
                               Total Physical Used: 23.04GB
                    Total Storage Efficiency Ratio: 9.00:1
Total Data Reduction Logical Used Without Snapshots: 43.97GB
Total Data Reduction Physical Used Without Snapshots: 17.99GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.44:1
Total Data Reduction Logical Used without snapshots and flexclones: 43.97GB
Total Data Reduction Physical Used without snapshots and flexclones: 17.99GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.44:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 161.6GB
Total Physical Used in FabricPool Performance Tier: 19.38GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 8.34:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 35.59GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 14.35GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.48:1
                Logical Space Used for All Volumes: 43.97GB
               Physical Space Used for All Volumes: 27.80GB
               Space Saved by Volume Deduplication: 16.17GB
Space Saved by Volume Deduplication and pattern detection: 16.17GB
                Volume Deduplication Savings ratio: 1.58:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.58:1
               Logical Space Used by the Aggregate: 33.19GB
              Physical Space Used by the Aggregate: 23.04GB
           Space Saved by Aggregate Data Reduction: 10.15GB
                 Aggregate Data Reduction SE Ratio: 1.44:1
              Logical Size Used by Snapshot Copies: 163.4GB
             Physical Size Used by Snapshot Copies: 7.27GB
              Snapshot Volume Data Reduction Ratio: 22.48:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 22.48:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 1

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           312KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           292KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                           144KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         82.71MB     1%    3%
         vol1_dst2
                  test.2023-12-22_0533                     608KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           364KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           420KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           280KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                          3.83GB    39%   46%
svm3     vol1_dst2_dst
                  test.2023-12-22_0533                     536KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           380KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           380KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           300KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                           152KB     0%    0%
         vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         24.16MB     0%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                         41.57MB     1%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                          4.96MB     0%    0%
22 entries were displayed.

The Storage Efficiency deduplication performed on vol1_dst2 was not carried over to vol1_dst2_dst (its dedupe-space-saved is only 774.5MB), but Space Saved by Aggregate Data Reduction increased from 7.90GB to 10.15GB, a gain of about 2.25GB.

Auto Adaptive Compression is also reported, at 3.71GB.

So the aggregate-layer data reduction savings appear to be properly maintained.

SnapMirror update between the svm2 and svm3 volumes

Run an incremental SnapMirror transfer between the volumes on svm2 and svm3.

::*> snapmirror update -destination-path svm3:vol1_dst2_dst
Operation is queued: snapmirror update of destination "svm3:vol1_dst2_dst".
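
By the way, in practice you would normally attach a schedule to the relationship instead of updating it by hand, so that incremental transfers run automatically. A sketch using the same assumed ssh access as above (hourly is just an example; ONTAP also ships daily and weekly job schedules):

# Attach the built-in hourly schedule so updates run automatically
# (the management endpoint hostname is a placeholder, as above)
ssh "fsxadmin@${FSX_MGMT}" \
    "snapmirror modify -destination-path svm3:vol1_dst2_dst -schedule hourly"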

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Broken-off
                                      Idle           -         true    -
svm2:vol1_dst2
            XDP  svm3:vol1_dst2_dst
                              Snapmirrored
                                      Idle           -         true    -
4 entries were displayed.

::*> snapmirror show -destination-path svm3:vol1_dst2_dst

                                  Source Path: svm2:vol1_dst2
                               Source Cluster: -
                               Source Vserver: svm2
                                Source Volume: vol1_dst2
                             Destination Path: svm3:vol1_dst2_dst
                          Destination Cluster: -
                          Destination Vserver: svm3
                           Destination Volume: vol1_dst2_dst
                            Relationship Type: XDP
                      Relationship Group Type: none
                             Managing Vserver: svm3
                          SnapMirror Schedule: -
                       SnapMirror Policy Type: async-mirror
                            SnapMirror Policy: MirrorAllSnapshots
                                  Tries Limit: -
                            Throttle (KB/sec): unlimited
              Consistency Group Item Mappings: -
           Current Transfer Throttle (KB/sec): -
                                 Mirror State: Snapmirrored
                          Relationship Status: Idle
                      File Restore File Count: -
                       File Restore File List: -
                            Transfer Snapshot: -
                            Snapshot Progress: -
                               Total Progress: -
                    Network Compression Ratio: -
                          Snapshot Checkpoint: -
                              Newest Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                    Newest Snapshot Timestamp: 12/25 05:04:06
                            Exported Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                  Exported Snapshot Timestamp: 12/25 05:04:06
                                      Healthy: true
                              Relationship ID: cb1dc5b5-a2e8-11ee-981e-bdd56ead09c8
                          Source Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8
                     Destination Vserver UUID: 674509fa-a065-11ee-981e-bdd56ead09c8
                         Current Operation ID: -
                                Transfer Type: -
                               Transfer Error: -
                           Last Transfer Type: update
                          Last Transfer Error: -
                    Last Transfer Error Codes: -
                           Last Transfer Size: 0B
      Last Transfer Network Compression Ratio: 1:1
                       Last Transfer Duration: 0:0:0
                           Last Transfer From: svm2:vol1_dst2
                  Last Transfer End Timestamp: 12/25 05:52:44
                             Unhealthy Reason: -
                        Progress Last Updated: -
                      Relationship Capability: 8.2 and above
                                     Lag Time: 0:49:1
                    Current Transfer Priority: -
                             SMTape Operation: -
                 Destination Volume Node Name: FsxId0ab6f9b00824a187c-01
                 Identity Preserve Vserver DR: -
                 Number of Successful Updates: 2
                     Number of Failed Updates: 0
                 Number of Successful Resyncs: 0
                     Number of Failed Resyncs: 0
                  Number of Successful Breaks: 0
                      Number of Failed Breaks: 0
                         Total Transfer Bytes: 5891022441
               Total Transfer Time in Seconds: 76
                Source Volume MSIDs Preserved: -
                                       OpMask: ffffffffffffffff
                       Is Auto Expand Enabled: -
          Percent Complete for Current Status: -

The transfer size is 0B. That makes sense, since no new Snapshot copy had been created on vol1_dst2 after the initialize.

Check the Storage Efficiency, volume, aggregate, and Snapshot information once more.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst2
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst2_dst
               Disabled
                       -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst_dst
               Enabled auto   false       true               efficient               true          true            true                              true                            false
5 entries were displayed.

::*> volume efficiency inactive-data-compression show -volume vol1_dst2_dst -instance

                                                                Volume: vol1_dst2_dst
                                                               Vserver: svm3
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 0
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 0
             Time since Last Inactive Data Compression Scan ended(sec): 797
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 0
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 6.85GB    16GB            15.20GB 8.35GB 54%          808.8MB            9%                         808.8MB             9.14GB       60%                  -                 9.14GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.98GB             45%                        3.00GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm2    vol1_dst2
               9.84GB
                    1.45GB    9.84GB          9.35GB  7.90GB 84%          4.53GB             36%                        4.10GB              12.40GB      133%                 -                 9.06GB              0B                                  0%
svm3    vol1_dst2_dst
               9.88GB
                    1.55GB    9.88GB          9.88GB  8.32GB 84%          774.5MB            8%                         8.24GB              9.04GB       92%                  -                 9.04GB              0B                                  0%
svm3    vol1_dst_dst
               4.90GB
                    896.4MB   4.90GB          4.90GB  4.03GB 82%          5.11GB             56%                        2.21GB              9.14GB       186%                 -                 9.07GB              0B                                  0%
5 entries were displayed.

::*> volume show-footprint -volume vol1_dst2_dst


      Vserver : svm3
      Volume  : vol1_dst2_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            8.32GB       1%
             Footprint in Performance Tier             8.41GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        61.31MB       0%
      Delayed Frees                                   86.66MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  8.46GB       1%

      Footprint Data Reduction                         3.70GB       0%
           Auto Adaptive Compression                   3.70GB       0%
      Effective Total Footprint                        4.76GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 207.4GB
                               Total Physical Used: 22.45GB
                    Total Storage Efficiency Ratio: 9.24:1
Total Data Reduction Logical Used Without Snapshots: 43.97GB
Total Data Reduction Physical Used Without Snapshots: 17.44GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.52:1
Total Data Reduction Logical Used without snapshots and flexclones: 43.97GB
Total Data Reduction Physical Used without snapshots and flexclones: 17.44GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.52:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 161.6GB
Total Physical Used in FabricPool Performance Tier: 18.78GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 8.60:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 35.59GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 13.80GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.58:1
                Logical Space Used for All Volumes: 43.97GB
               Physical Space Used for All Volumes: 27.80GB
               Space Saved by Volume Deduplication: 16.17GB
Space Saved by Volume Deduplication and pattern detection: 16.17GB
                Volume Deduplication Savings ratio: 1.58:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.58:1
               Logical Space Used by the Aggregate: 32.60GB
              Physical Space Used by the Aggregate: 22.45GB
           Space Saved by Aggregate Data Reduction: 10.15GB
                 Aggregate Data Reduction SE Ratio: 1.45:1
              Logical Size Used by Snapshot Copies: 163.4GB
             Physical Size Used by Snapshot Copies: 7.27GB
              Snapshot Volume Data Reduction Ratio: 22.48:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 22.48:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 1

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           312KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           292KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                           144KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         82.71MB     1%    3%
         vol1_dst2
                  test.2023-12-22_0533                     608KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           364KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           420KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           280KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                          3.83GB    39%   46%
svm3     vol1_dst2_dst
                  test.2023-12-22_0533                     536KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           380KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           380KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           300KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                           168KB     0%    0%
         vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         24.16MB     0%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                         41.57MB     1%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                          4.96MB     0%    0%
22 entries were displayed.

Neither the deduplication savings nor the aggregate-level data reduction savings have changed.

Also, no new Snapshot has been added to either vol1_dst2 or vol1_dst2_dst.

It therefore appears that, to get the deduplication savings from Storage Efficiency reflected on the final destination, the cascade has to be driven with two successive SnapMirror transfers: vol1 -> vol1_dst2 -> vol1_dst2_dst.
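In other words, ordering matters: hop 1 (vol1 -> vol1_dst2) has to finish before hop 2 (vol1_dst2 -> vol1_dst2_dst) is updated, which is exactly what the next two sections do by hand. Purely as a sketch, automating that ordering might look something like the following. This is an assumption-laden illustration: it presumes the ONTAP CLI is reachable over ssh as fsxadmin, and FSXN_MGMT is a hypothetical placeholder for your file system's management endpoint.

#!/bin/bash
# Hypothetical sketch: drive both hops of the cascade in order.
# FSXN_MGMT is a placeholder; replace it with your management endpoint.
FSXN_MGMT="management.fs-xxxxxxxxxxxxxxxxx.fsx.us-east-1.amazonaws.com"

wait_idle() {
  # Poll until the relationship's status shows Idle again.
  # Note: "snapmirror update" is queued asynchronously, so a robust
  # script would also confirm a transfer actually started; this naive
  # poll is only for illustration.
  until ssh "fsxadmin@${FSXN_MGMT}" \
      "snapmirror show -destination-path $1 -fields status" | grep -q "Idle"; do
    sleep 10
  done
}

# Hop 1: source -> secondary volume
ssh "fsxadmin@${FSXN_MGMT}" "snapmirror update -destination-path svm2:vol1_dst2"
wait_idle "svm2:vol1_dst2"

# Hop 2: secondary -> tertiary volume
ssh "fsxadmin@${FSXN_MGMT}" "snapmirror update -destination-path svm3:vol1_dst2_dst"
wait_idle "svm3:vol1_dst2_dst"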

SnapMirror update between the svm and svm2 volumes

Let's run an incremental SnapMirror transfer between the svm and svm2 volumes.

::*> snapmirror update -destination-path svm2:vol1_dst2
Operation is queued: snapmirror update of destination "svm2:vol1_dst2".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Broken-off
                                      Idle           -         true    -
svm2:vol1_dst2
            XDP  svm3:vol1_dst2_dst
                              Snapmirrored
                                      Idle           -         true    -
4 entries were displayed.

::*> snapmirror show -destination-path svm2:vol1_dst2

                                  Source Path: svm:vol1
                               Source Cluster: -
                               Source Vserver: svm
                                Source Volume: vol1
                             Destination Path: svm2:vol1_dst2
                          Destination Cluster: -
                          Destination Vserver: svm2
                           Destination Volume: vol1_dst2
                            Relationship Type: XDP
                      Relationship Group Type: none
                             Managing Vserver: svm2
                          SnapMirror Schedule: -
                       SnapMirror Policy Type: async-mirror
                            SnapMirror Policy: MirrorAllSnapshots
                                  Tries Limit: -
                            Throttle (KB/sec): unlimited
              Consistency Group Item Mappings: -
           Current Transfer Throttle (KB/sec): -
                                 Mirror State: Snapmirrored
                          Relationship Status: Idle
                      File Restore File Count: -
                       File Restore File List: -
                            Transfer Snapshot: -
                            Snapshot Progress: -
                               Total Progress: -
                    Network Compression Ratio: -
                          Snapshot Checkpoint: -
                              Newest Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                    Newest Snapshot Timestamp: 12/25 06:00:27
                            Exported Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                  Exported Snapshot Timestamp: 12/25 06:00:27
                                      Healthy: true
                              Relationship ID: 078ecbbc-a2e3-11ee-981e-bdd56ead09c8
                          Source Vserver UUID: 04ee1778-a058-11ee-981e-bdd56ead09c8
                     Destination Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8
                         Current Operation ID: -
                                Transfer Type: -
                               Transfer Error: -
                           Last Transfer Type: update
                          Last Transfer Error: -
                    Last Transfer Error Codes: -
                           Last Transfer Size: 3.27KB
      Last Transfer Network Compression Ratio: 1:1
                       Last Transfer Duration: 0:0:2
                           Last Transfer From: svm:vol1
                  Last Transfer End Timestamp: 12/25 06:00:29
                             Unhealthy Reason: -
                        Progress Last Updated: -
                      Relationship Capability: 8.2 and above
                                     Lag Time: 0:0:22
                    Current Transfer Priority: -
                             SMTape Operation: -
                 Destination Volume Node Name: FsxId0ab6f9b00824a187c-01
                 Identity Preserve Vserver DR: -
                 Number of Successful Updates: 2
                     Number of Failed Updates: 0
                 Number of Successful Resyncs: 0
                     Number of Failed Resyncs: 0
                  Number of Successful Breaks: 0
                      Number of Failed Breaks: 0
                         Total Transfer Bytes: 9057888528
               Total Transfer Time in Seconds: 223
                Source Volume MSIDs Preserved: -
                                       OpMask: ffffffffffffffff
                       Is Auto Expand Enabled: -
          Percent Complete for Current Status: -

The transfer size is 3.27KB.

Let's check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst2
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst2_dst
               Disabled
                       -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst_dst
               Enabled auto   false       true               efficient               true          true            true                              true                            false
5 entries were displayed.

::*> volume efficiency inactive-data-compression show -volume vol1_dst2_dst -instance

                                                                Volume: vol1_dst2_dst
                                                               Vserver: svm3
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 0
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 0
             Time since Last Inactive Data Compression Scan ended(sec): 878
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 0
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 6.85GB    16GB            15.20GB 8.35GB 54%          808.8MB            9%                         808.8MB             9.14GB       60%                  -                 9.14GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.98GB             45%                        3.00GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm2    vol1_dst2
               9.84GB
                    1.45GB    9.84GB          9.35GB  7.90GB 84%          4.53GB             36%                        4.10GB              12.40GB      133%                 -                 9.06GB              0B                                  0%
svm3    vol1_dst2_dst
               9.88GB
                    1.55GB    9.88GB          9.88GB  8.32GB 84%          774.5MB            8%                         8.24GB              9.04GB       92%                  -                 9.04GB              0B                                  0%
svm3    vol1_dst_dst
               4.90GB
                    896.4MB   4.90GB          4.90GB  4.03GB 82%          5.11GB             56%                        2.21GB              9.14GB       186%                 -                 9.07GB              0B                                  0%
5 entries were displayed.

::*> volume show-footprint -volume vol1_dst2_dst


      Vserver : svm3
      Volume  : vol1_dst2_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            8.32GB       1%
             Footprint in Performance Tier             8.41GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        61.31MB       0%
      Delayed Frees                                   86.66MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  8.46GB       1%

      Footprint Data Reduction                         3.70GB       0%
           Auto Adaptive Compression                   3.70GB       0%
      Effective Total Footprint                        4.76GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 225.6GB
                               Total Physical Used: 22.45GB
                    Total Storage Efficiency Ratio: 10.05:1
Total Data Reduction Logical Used Without Snapshots: 43.96GB
Total Data Reduction Physical Used Without Snapshots: 17.45GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.52:1
Total Data Reduction Logical Used without snapshots and flexclones: 43.96GB
Total Data Reduction Physical Used without snapshots and flexclones: 17.45GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.52:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 170.9GB
Total Physical Used in FabricPool Performance Tier: 18.79GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 9.10:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 35.59GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 13.80GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.58:1
                Logical Space Used for All Volumes: 43.96GB
               Physical Space Used for All Volumes: 27.80GB
               Space Saved by Volume Deduplication: 16.17GB
Space Saved by Volume Deduplication and pattern detection: 16.17GB
                Volume Deduplication Savings ratio: 1.58:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.58:1
               Logical Space Used by the Aggregate: 32.60GB
              Physical Space Used by the Aggregate: 22.45GB
           Space Saved by Aggregate Data Reduction: 10.15GB
                 Aggregate Data Reduction SE Ratio: 1.45:1
              Logical Size Used by Snapshot Copies: 181.6GB
             Physical Size Used by Snapshot Copies: 7.27GB
              Snapshot Volume Data Reduction Ratio: 24.98:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 24.98:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 1

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           312KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           292KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                           148KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                                                           136KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         82.71MB     1%    3%
         vol1_dst2
                  test.2023-12-22_0533                     608KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           364KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           420KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           280KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                          3.83GB    39%   46%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                                                           140KB     0%    0%
svm3     vol1_dst2_dst
                  test.2023-12-22_0533                     536KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           380KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           380KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           300KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                           168KB     0%    0%
         vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         24.16MB     0%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                         41.57MB     1%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                          4.96MB     0%    0%
24 entries were displayed.

New Snapshots have been added to both vol1 and vol1_dst2.
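Incidentally, the wildcard queries used elsewhere in this post also work on the -snapshot field, so listing only the SnapMirror-created copies is a quick way to spot the new ones (a convenience, not a required step):

::*> snapshot show -volume vol1_dst2 -snapshot snapmirror*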

SnapMirror update between the svm2 and svm3 volumes

Let's run an incremental SnapMirror transfer between the svm2 and svm3 volumes.

::*> snapmirror update -destination-path svm3:vol1_dst2_dst
Operation is queued: snapmirror update of destination "svm3:vol1_dst2_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Broken-off
                                      Idle           -         true    -
svm2:vol1_dst2
            XDP  svm3:vol1_dst2_dst
                              Snapmirrored
                                      Finalizing     70.43MB   true    12/25 06:01:34
4 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Broken-off
                                      Idle           -         true    -
svm2:vol1_dst2
            XDP  svm3:vol1_dst2_dst
                              Snapmirrored
                                      Idle           -         true    -
4 entries were displayed.

::*> snapmirror show -destination-path svm3:vol1_dst2_dst

                                  Source Path: svm2:vol1_dst2
                               Source Cluster: -
                               Source Vserver: svm2
                                Source Volume: vol1_dst2
                             Destination Path: svm3:vol1_dst2_dst
                          Destination Cluster: -
                          Destination Vserver: svm3
                           Destination Volume: vol1_dst2_dst
                            Relationship Type: XDP
                      Relationship Group Type: none
                             Managing Vserver: svm3
                          SnapMirror Schedule: -
                       SnapMirror Policy Type: async-mirror
                            SnapMirror Policy: MirrorAllSnapshots
                                  Tries Limit: -
                            Throttle (KB/sec): unlimited
              Consistency Group Item Mappings: -
           Current Transfer Throttle (KB/sec): -
                                 Mirror State: Snapmirrored
                          Relationship Status: Idle
                      File Restore File Count: -
                       File Restore File List: -
                            Transfer Snapshot: -
                            Snapshot Progress: -
                               Total Progress: -
                    Network Compression Ratio: -
                          Snapshot Checkpoint: -
                              Newest Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                    Newest Snapshot Timestamp: 12/25 06:00:27
                            Exported Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                  Exported Snapshot Timestamp: 12/25 06:00:27
                                      Healthy: true
                              Relationship ID: cb1dc5b5-a2e8-11ee-981e-bdd56ead09c8
                          Source Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8
                     Destination Vserver UUID: 674509fa-a065-11ee-981e-bdd56ead09c8
                         Current Operation ID: -
                                Transfer Type: -
                               Transfer Error: -
                           Last Transfer Type: update
                          Last Transfer Error: -
                    Last Transfer Error Codes: -
                           Last Transfer Size: 70.43MB
      Last Transfer Network Compression Ratio: 1:1
                       Last Transfer Duration: 0:0:26
                           Last Transfer From: svm2:vol1_dst2
                  Last Transfer End Timestamp: 12/25 06:01:55
                             Unhealthy Reason: -
                        Progress Last Updated: -
                      Relationship Capability: 8.2 and above
                                     Lag Time: 0:1:39
                    Current Transfer Priority: -
                             SMTape Operation: -
                 Destination Volume Node Name: FsxId0ab6f9b00824a187c-01
                 Identity Preserve Vserver DR: -
                 Number of Successful Updates: 3
                     Number of Failed Updates: 0
                 Number of Successful Resyncs: 0
                     Number of Failed Resyncs: 0
                  Number of Successful Breaks: 0
                      Number of Failed Breaks: 0
                         Total Transfer Bytes: 5964877057
               Total Transfer Time in Seconds: 102
                Source Volume MSIDs Preserved: -
                                       OpMask: ffffffffffffffff
                       Is Auto Expand Enabled: -
          Percent Complete for Current Status: -

The transfer size was 70.43MB.

Let's check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst2
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst2_dst
               Disabled
                       -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst_dst
               Enabled auto   false       true               efficient               true          true            true                              true                            false
5 entries were displayed.

::*> volume efficiency inactive-data-compression show -volume vol1_dst2_dst -instance

                                                                Volume: vol1_dst2_dst
                                                               Vserver: svm3
                                                            Is Enabled: true
                                                             Scan Mode: -
                                                              Progress: IDLE
                                                                Status: SUCCESS
                                                 Compression Algorithm: lzopro
                                                        Failure Reason: -
                                                          Total Blocks: -
                                                Total blocks Processed: -
                                                            Percentage: -
                                                  Phase1 L1s Processed: -
                                                    Phase1 Lns Skipped: -
                                                   Phase2 Total Blocks: -
                                               Phase2 Blocks Processed: -
                                     Number of Cold Blocks Encountered: 0
                                             Number of Repacked Blocks: 0
                                     Number of Compression Done Blocks: 0
                                              Number of Vol-Overwrites: 0
           Time since Last Inactive Data Compression Scan started(sec): 0
             Time since Last Inactive Data Compression Scan ended(sec): 953
Time since Last Successful Inactive Data Compression Scan started(sec): -
  Time since Last Successful Inactive Data Compression Scan ended(sec): 0
                           Average time for Cold Data Compression(sec): 0
                                                        Tuning Enabled: true
                                                             Threshold: 14
                                                 Threshold Upper Limit: 21
                                                 Threshold Lower Limit: 14
                                            Client Read history window: 14
                                        Incompressible Data Percentage: 0%


::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 6.85GB    16GB            15.20GB 8.35GB 54%          808.8MB            9%                         808.8MB             9.14GB       60%                  -                 9.14GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.98GB             45%                        3.00GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm2    vol1_dst2
               9.84GB
                    1.45GB    9.84GB          9.35GB  7.90GB 84%          4.53GB             36%                        4.10GB              12.40GB      133%                 -                 9.06GB              0B                                  0%
svm3    vol1_dst2_dst
               10.00GB
                    1.63GB    10.00GB         10.00GB 8.37GB 83%          4.53GB             35%                        4.41GB              12.87GB      129%                 -                 9.04GB              0B                                  0%
svm3    vol1_dst_dst
               4.90GB
                    896.4MB   4.90GB          4.90GB  4.03GB 82%          5.11GB             56%                        2.21GB              9.14GB       186%                 -                 9.07GB              0B                                  0%
5 entries were displayed.

::*> volume show-footprint -volume vol1_dst2_dst


      Vserver : svm3
      Volume  : vol1_dst2_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            8.37GB       1%
             Footprint in Performance Tier             8.47GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        61.92MB       0%
      Delayed Frees                                   100.2MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  8.53GB       1%

      Footprint Data Reduction                         3.73GB       0%
           Auto Adaptive Compression                   3.73GB       0%
      Effective Total Footprint                        4.80GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 234.6GB
                               Total Physical Used: 22.51GB
                    Total Storage Efficiency Ratio: 10.42:1
Total Data Reduction Logical Used Without Snapshots: 43.96GB
Total Data Reduction Physical Used Without Snapshots: 14.86GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.96:1
Total Data Reduction Logical Used without snapshots and flexclones: 43.96GB
Total Data Reduction Physical Used without snapshots and flexclones: 14.86GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.96:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 180.0GB
Total Physical Used in FabricPool Performance Tier: 18.85GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 9.55:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 35.59GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 11.22GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.17:1
                Logical Space Used for All Volumes: 43.96GB
               Physical Space Used for All Volumes: 24.02GB
               Space Saved by Volume Deduplication: 19.95GB
Space Saved by Volume Deduplication and pattern detection: 19.95GB
                Volume Deduplication Savings ratio: 1.83:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.83:1
               Logical Space Used by the Aggregate: 32.66GB
              Physical Space Used by the Aggregate: 22.51GB
           Space Saved by Aggregate Data Reduction: 10.15GB
                 Aggregate Data Reduction SE Ratio: 1.45:1
              Logical Size Used by Snapshot Copies: 190.6GB
             Physical Size Used by Snapshot Copies: 11.09GB
              Snapshot Volume Data Reduction Ratio: 17.18:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 17.18:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 1

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           312KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           292KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                           148KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                                                           136KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         82.71MB     1%    3%
         vol1_dst2
                  test.2023-12-22_0533                     608KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           364KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           420KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           280KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                          3.83GB    39%   46%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                                                           140KB     0%    0%
svm3     vol1_dst2_dst
                  test.2023-12-22_0533                     536KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           380KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           380KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           300KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                          3.83GB    38%   46%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                                                           148KB     0%    0%
         vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         24.16MB     0%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                         41.57MB     1%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                          4.96MB     0%    0%
25 entries were displayed.

Deduplication is now effective on vol1_dst2_dst.

This confirms that, to get the deduplication savings from Storage Efficiency reflected on the final destination, the cascade has to be driven with two successive SnapMirror transfers: vol1 -> vol1_dst2 -> vol1_dst2_dst.
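If you just want a quick before/after check of whether the savings actually arrived on the tertiary volume, narrowing the earlier volume show to the dedupe counters is enough:

::*> volume show -volume vol1_dst2_dst -fields dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared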

Deleting the Snapshots on vol1_dst2_dst

Let's delete the Snapshots on vol1_dst2_dst one by one and watch how the volume and aggregate usage changes.

Cut over the SnapMirror relationship by quiescing it and then breaking it.

::*> snapmirror quiesce -destination-path svm3:vol1_dst2_dst
Operation succeeded: snapmirror quiesce for destination "svm3:vol1_dst2_dst".

::*> snapmirror break -destination-path svm3:vol1_dst2_dst
Operation succeeded: snapmirror break for destination "svm3:vol1_dst2_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Snapmirrored
                                      Idle           -         true    -
                 svm3:vol1_dst_dst
                              Broken-off
                                      Idle           -         true    -
svm2:vol1_dst2
            XDP  svm3:vol1_dst2_dst
                              Broken-off
                                      Idle           -         true    -
4 entries were displayed.
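As an aside, breaking the relationship here is not irreversible: as long as a common Snapshot remains on both volumes, the relationship could later be re-established with a resync. We do not run it in this walkthrough, and note that a resync discards any data written to the destination after the common Snapshot:

::*> snapmirror resync -destination-path svm3:vol1_dst2_dst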

::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume,using-auto-adaptive-compression
vserver volume state   policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Enabled auto   false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm2    vol1_dst2
               Enabled -      false       true               efficient               false         true            true                              true                            false
svm3    vol1_dst2_dst
               Enabled auto   false       true               efficient               true          true            true                              true                            false
svm3    vol1_dst_dst
               Enabled auto   false       true               efficient               true          true            true                              true                            false
5 entries were displayed.

Mount vol1_dst2_dst to a junction path.

::*> volume mount -vserver svm3 -volume vol1_dst2_dst -junction-path /vol1_dst2_dst
Queued private job: 52

Mount vol1_dst2_dst from the NFS client.

$ sudo mkdir -p /mnt/fsxn/vol1_dst2_dst
$ sudo mount -t nfs svm-00c880c6c6eb922ed.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1_dst2_dst /mnt/fsxn/vol1_dst2_dst

$ df -hT -t nfs4
Filesystem                                                                            Type  Size  Used Avail Use% Mounted on
svm-0058ae83d258ab2e3.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1          nfs4   16G  8.4G  6.9G  55% /mnt/fsxn/vol1
svm-00c880c6c6eb922ed.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1_dst_dst  nfs4  5.0G  4.1G  897M  83% /mnt/fsxn/vol1_dst_dst
svm-00c880c6c6eb922ed.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1_dst2_dst nfs4   10G  8.4G  1.7G  84% /mnt/fsxn/vol1_dst2_dst

$ ls -l /mnt/fsxn/vol1_dst2_dst/.snapshot
total 24
drwxr-xr-x. 2 root root 4096 Dec 22 05:28 snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
drwxr-xr-x. 2 root root 4096 Dec 22 06:55 snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
drwxr-xr-x. 2 root root 4096 Dec 22 07:53 snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
drwxr-xr-x. 2 root root 4096 Dec 22 07:53 snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
drwxr-xr-x. 2 root root 4096 Dec 22 07:53 snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
drwxr-xr-x. 2 root root 4096 Dec 22 05:28 test.2023-12-22_0533

Delete the Snapshot.

::*> snapshot delete -vserver svm3 -volume vol1_dst2_dst -snapshot test.2023-12-22_0533

Warning: Deleting a Snapshot copy permanently removes data that is stored only in that Snapshot copy. Are you sure you want to delete Snapshot copy "test.2023-12-22_0533" for volume "vol1_dst2_dst" in Vserver "svm3" ? {y|n}: y

::*>
::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           312KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           292KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                           148KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                                                           136KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         82.71MB     1%    3%
         vol1_dst2
                  test.2023-12-22_0533                     608KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           364KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           420KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           280KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                          3.83GB    39%   46%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                                                           140KB     0%    0%
svm3     vol1_dst2_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           380KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           380KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           300KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                          3.83GB    38%   46%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                                                         93.90MB     1%    2%
         vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         24.16MB     0%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                         41.57MB     1%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                          4.96MB     0%    0%
24 entries were displayed.

Check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 6.85GB    16GB            15.20GB 8.35GB 54%          808.8MB            9%                         808.8MB             9.14GB       60%                  -                 9.14GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.98GB             45%                        3.00GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm2    vol1_dst2
               9.84GB
                    1.45GB    9.84GB          9.35GB  7.90GB 84%          4.53GB             36%                        4.10GB              12.40GB      133%                 -                 9.06GB              0B                                  0%
svm3    vol1_dst2_dst
               10.00GB
                    1.62GB    10.00GB         10.00GB 8.37GB 83%          4.68GB             36%                        2.15GB              13.06GB      131%                 -                 9.14GB              0B                                  0%
svm3    vol1_dst_dst
               4.90GB
                    896.4MB   4.90GB          4.90GB  4.03GB 82%          5.11GB             56%                        2.21GB              9.14GB       186%                 -                 9.07GB              0B                                  0%
5 entries were displayed.

::*> volume show-footprint -volume vol1_dst2_dst


      Vserver : svm3
      Volume  : vol1_dst2_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            8.37GB       1%
             Footprint in Performance Tier             8.46GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        61.92MB       0%
      Delayed Frees                                   91.20MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  8.52GB       1%

      Footprint Data Reduction                         3.73GB       0%
           Auto Adaptive Compression                   3.73GB       0%
      Effective Total Footprint                        4.80GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 228.6GB
                               Total Physical Used: 22.52GB
                    Total Storage Efficiency Ratio: 10.15:1
Total Data Reduction Logical Used Without Snapshots: 44.03GB
Total Data Reduction Physical Used Without Snapshots: 14.81GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.97:1
Total Data Reduction Logical Used without snapshots and flexclones: 44.03GB
Total Data Reduction Physical Used without snapshots and flexclones: 14.81GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.97:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 174.0GB
Total Physical Used in FabricPool Performance Tier: 18.86GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 9.23:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 35.65GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 11.16GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.19:1
                Logical Space Used for All Volumes: 44.03GB
               Physical Space Used for All Volumes: 23.93GB
               Space Saved by Volume Deduplication: 20.09GB
Space Saved by Volume Deduplication and pattern detection: 20.09GB
                Volume Deduplication Savings ratio: 1.84:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.84:1
               Logical Space Used by the Aggregate: 32.67GB
              Physical Space Used by the Aggregate: 22.52GB
           Space Saved by Aggregate Data Reduction: 10.15GB
                 Aggregate Data Reduction SE Ratio: 1.45:1
              Logical Size Used by Snapshot Copies: 184.6GB
             Physical Size Used by Snapshot Copies: 11.19GB
              Snapshot Volume Data Reduction Ratio: 16.50:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 16.50:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

Perhaps because the deleted Snapshot was only 536KB, there is no noticeable change.
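As an aside, if you want to estimate how much space a Snapshot deletion will actually free before running it, checking the Snapshot sizes and reclaimable space along these lines should work (a sketch: snapshot show -fields size is standard, while volume snapshot compute-reclaimable is an advanced-privilege command, so confirm it is available on your ONTAP version; <snapshot-name> is a placeholder):

::*> snapshot show -vserver svm3 -volume vol1_dst2_dst -fields size
::*> volume snapshot compute-reclaimable -vserver svm3 -volume vol1_dst2_dst -snapshots <snapshot-name>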

Let's also check the usage from the NFS client.

$ df -hT -t nfs4
Filesystem                                                                            Type  Size  Used Avail Use% Mounted on
svm-0058ae83d258ab2e3.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1          nfs4   16G  8.4G  6.9G  55% /mnt/fsxn/vol1
svm-00c880c6c6eb922ed.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1_dst_dst  nfs4  5.0G  4.1G  897M  83% /mnt/fsxn/vol1_dst_dst
svm-00c880c6c6eb922ed.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1_dst2_dst nfs4   10G  8.4G  1.7G  84% /mnt/fsxn/vol1_dst2_dst

No change here either.

Next, let's delete the Snapshots of around 300KB each in one go.

::*> snapshot delete -vserver svm3 -volume vol1_dst2_dst -snapshot snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528, snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900, snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507

Warning: Deleting a Snapshot copy permanently removes data that is stored only in that Snapshot copy. Are you sure you want to delete Snapshot copy "snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528" for volume "vol1_dst2_dst" in Vserver "svm3"
         ? {y|n}: y

Warning: Deleting a Snapshot copy permanently removes data that is stored only in that Snapshot copy. Are you sure you want to delete Snapshot copy "snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900" for volume "vol1_dst2_dst" in Vserver "svm3"
         ? {y|n}: y

Warning: Deleting a Snapshot copy permanently removes data that is stored only in that Snapshot copy. Are you sure you want to delete Snapshot copy "snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507" for volume "vol1_dst2_dst" in Vserver "svm3"
         ? {y|n}: y
3 entries were acted on.
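Incidentally, rather than listing every Snapshot name, the -snapshot parameter should also accept wildcard queries, so a pattern like the following ought to target multiple Snapshots in one command (unverified sketch; review each confirmation prompt carefully before answering y):

::*> snapshot delete -vserver svm3 -volume vol1_dst2_dst -snapshot snapmirror.*2023-12-22*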

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           312KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           292KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                           148KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                                                           136KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         82.71MB     1%    3%
         vol1_dst2
                  test.2023-12-22_0533                     608KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           364KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           420KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           280KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                          3.83GB    39%   46%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                                                           140KB     0%    0%
svm3     vol1_dst2_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                          3.83GB    38%   46%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                                                         93.90MB     1%    2%
         vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         24.16MB     0%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                         41.57MB     1%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                          4.96MB     0%    0%
21 entries were displayed.

Check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 6.85GB    16GB            15.20GB 8.35GB 54%          808.8MB            9%                         808.8MB             9.14GB       60%                  -                 9.14GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.98GB             45%                        3.00GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm2    vol1_dst2
               9.84GB
                    1.45GB    9.84GB          9.35GB  7.90GB 84%          4.53GB             36%                        4.10GB              12.40GB      133%                 -                 9.06GB              0B                                  0%
svm3    vol1_dst2_dst
               10.00GB
                    1.62GB    10.00GB         10.00GB 8.37GB 83%          4.68GB             36%                        2.15GB              13.05GB      131%                 -                 9.14GB              0B                                  0%
svm3    vol1_dst_dst
               4.90GB
                    896.4MB   4.90GB          4.90GB  4.03GB 82%          5.11GB             56%                        2.21GB              9.14GB       186%                 -                 9.07GB              0B                                  0%
5 entries were displayed.

::*> volume show-footprint -volume vol1_dst2_dst


      Vserver : svm3
      Volume  : vol1_dst2_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            8.37GB       1%
             Footprint in Performance Tier             8.46GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                        61.92MB       0%
      Delayed Frees                                   93.18MB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  8.52GB       1%

      Footprint Data Reduction                         3.73GB       0%
           Auto Adaptive Compression                   3.73GB       0%
      Effective Total Footprint                        4.80GB       1%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 205.5GB
                               Total Physical Used: 22.52GB
                    Total Storage Efficiency Ratio: 9.13:1
Total Data Reduction Logical Used Without Snapshots: 44.03GB
Total Data Reduction Physical Used Without Snapshots: 14.81GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.97:1
Total Data Reduction Logical Used without snapshots and flexclones: 44.03GB
Total Data Reduction Physical Used without snapshots and flexclones: 14.81GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.97:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 150.9GB
Total Physical Used in FabricPool Performance Tier: 18.86GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 8.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 35.65GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 11.17GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.19:1
                Logical Space Used for All Volumes: 44.03GB
               Physical Space Used for All Volumes: 23.93GB
               Space Saved by Volume Deduplication: 20.09GB
Space Saved by Volume Deduplication and pattern detection: 20.09GB
                Volume Deduplication Savings ratio: 1.84:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.84:1
               Logical Space Used by the Aggregate: 32.67GB
              Physical Space Used by the Aggregate: 22.52GB
           Space Saved by Aggregate Data Reduction: 10.15GB
                 Aggregate Data Reduction SE Ratio: 1.45:1
              Logical Size Used by Snapshot Copies: 161.5GB
             Physical Size Used by Snapshot Copies: 11.18GB
              Snapshot Volume Data Reduction Ratio: 14.44:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 14.44:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

As expected, no noticeable change.

Checking from the NFS client also shows no change.

$ df -hT -t nfs4
Filesystem                                                                            Type  Size  Used Avail Use% Mounted on
svm-0058ae83d258ab2e3.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1          nfs4   16G  8.4G  6.9G  55% /mnt/fsxn/vol1
svm-00c880c6c6eb922ed.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1_dst_dst  nfs4  5.0G  4.1G  897M  83% /mnt/fsxn/vol1_dst_dst
svm-00c880c6c6eb922ed.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1_dst2_dst nfs4   10G  8.4G  1.7G  84% /mnt/fsxn/vol1_dst2_dst

Now, delete the 3.83GB Snapshot.

::*> snapshot delete -vserver svm3 -volume vol1_dst2_dst -snapshot snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406

Warning: Deleting a Snapshot copy permanently removes data that is stored only in that Snapshot copy. Are you sure you want to delete Snapshot copy "snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406" for volume "vol1_dst2_dst" in Vserver "svm3"
         ? {y|n}: y

::*> snapshot show -volume vol1*
                                                                 ---Blocks---
Vserver  Volume   Snapshot                                  Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm      vol1
                  test.2023-12-22_0533                     160KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         24.45MB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           312KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           292KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                           148KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                                                           136KB     0%    0%
svm2     vol1_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_015623
                                                          1.91GB    25%   38%
                  test.2023-12-22_0533                   893.2MB    12%   22%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                         481.9MB     6%   13%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         82.71MB     1%    3%
         vol1_dst2
                  test.2023-12-22_0533                     608KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528
                                                           364KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                           420KB     0%    0%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                           280KB     0%    0%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406
                                                          3.83GB    39%   46%
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                                                           140KB     0%    0%
svm3     vol1_dst2_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027
                                                         94.08MB     2%    2%
         vol1_dst_dst
                  snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900
                                                         24.16MB     0%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_074801
                                                         41.57MB     1%    1%
                  snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507
                                                          4.96MB     0%    0%
20 entries were displayed.

Check the Storage Efficiency, volume, aggregate, and Snapshot information.

::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   16GB 6.85GB    16GB            15.20GB 8.35GB 54%          808.8MB            9%                         808.8MB             9.14GB       60%                  -                 9.14GB              -                                   -
svm2    vol1_dst
               7.54GB
                    1.10GB    7.54GB          7.16GB  6.06GB 84%          4.98GB             45%                        3.00GB              11.01GB      154%                 -                 8.05GB              0B                                  0%
svm2    vol1_dst2
               9.84GB
                    1.45GB    9.84GB          9.35GB  7.90GB 84%          4.53GB             36%                        4.10GB              12.40GB      133%                 -                 9.06GB              0B                                  0%
svm3    vol1_dst2_dst
               5.40GB
                    876.4MB   5.40GB          5.40GB  4.55GB 84%          4.68GB             51%                        2.15GB              9.23GB       171%                 -                 9.14GB              0B                                  0%
svm3    vol1_dst_dst
               4.90GB
                    896.4MB   4.90GB          4.90GB  4.03GB 82%          5.11GB             56%                        2.21GB              9.14GB       186%                 -                 9.07GB              0B                                  0%
5 entries were displayed.

::*> volume show-footprint -volume vol1_dst2_dst


      Vserver : svm3
      Volume  : vol1_dst2_dst

      Feature                                         Used       Used%
      --------------------------------             ----------    -----
      Volume Data Footprint                            4.55GB       1%
             Footprint in Performance Tier             7.42GB     100%
             Footprint in FSxFabricpoolObjectStore         0B       0%
      Volume Guarantee                                     0B       0%
      Flexible Volume Metadata                             0B       0%
      Delayed Frees                                    2.87GB       0%
      File Operation Metadata                             4KB       0%

      Total Footprint                                  7.42GB       1%

      Footprint Data Reduction                         3.27GB       0%
           Auto Adaptive Compression                   3.27GB       0%
      Effective Total Footprint                        4.15GB       0%

::*> aggr show-efficiency -instance

                             Name of the Aggregate: aggr1
                      Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 196.5GB
                               Total Physical Used: 19.67GB
                    Total Storage Efficiency Ratio: 9.99:1
Total Data Reduction Logical Used Without Snapshots: 44.03GB
Total Data Reduction Physical Used Without Snapshots: 14.81GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.97:1
Total Data Reduction Logical Used without snapshots and flexclones: 44.03GB
Total Data Reduction Physical Used without snapshots and flexclones: 14.81GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.97:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 141.8GB
Total Physical Used in FabricPool Performance Tier: 16.01GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 8.86:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 35.65GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 11.17GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.19:1
                Logical Space Used for All Volumes: 44.03GB
               Physical Space Used for All Volumes: 23.93GB
               Space Saved by Volume Deduplication: 20.09GB
Space Saved by Volume Deduplication and pattern detection: 20.09GB
                Volume Deduplication Savings ratio: 1.84:1
                 Space Saved by Volume Compression: 0B
                  Volume Compression Savings ratio: 1.00:1
      Space Saved by Inline Zero Pattern Detection: 0B
                    Volume Data Reduction SE Ratio: 1.84:1
               Logical Space Used by the Aggregate: 29.82GB
              Physical Space Used by the Aggregate: 19.67GB
           Space Saved by Aggregate Data Reduction: 10.15GB
                 Aggregate Data Reduction SE Ratio: 1.52:1
              Logical Size Used by Snapshot Copies: 152.5GB
             Physical Size Used by Snapshot Copies: 7.36GB
              Snapshot Volume Data Reduction Ratio: 20.72:1
            Logical Size Used by FlexClone Volumes: 0B
          Physical Sized Used by FlexClone Volumes: 0B
             FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 20.72:1
                         Number of Volumes Offline: 0
                    Number of SIS Disabled Volumes: 3
         Number of SIS Change Log Disabled Volumes: 0

Deleting the Snapshot changed the values as follows:

  • used : 8.37GB -> 4.55GB
  • logical-used : 13.05GB -> 9.23GB
  • Total Physical Used : 22.52GB -> 19.67GB

You can see that the physical usage of the volume and the aggregate decreased in line with the deletion of the 3.83GB Snapshot.
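If you only want to track these values without scrolling through the full outputs, narrowing volume show with -fields and summarizing the aggregate with aggr show-space is convenient (both commands are standard; the field names are the same ones used above):

::*> volume show -volume vol1_dst2_dst -fields used, logical-used
::*> aggr show-space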

Let's check from the NFS client as well.

$ df -hT -t nfs4
Filesystem                                                                            Type  Size  Used Avail Use% Mounted on
svm-0058ae83d258ab2e3.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1          nfs4   16G  8.4G  6.9G  55% /mnt/fsxn/vol1
svm-00c880c6c6eb922ed.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1_dst_dst  nfs4  5.0G  4.1G  897M  83% /mnt/fsxn/vol1_dst_dst
svm-00c880c6c6eb922ed.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1_dst2_dst nfs4  5.5G  4.6G  878M  85% /mnt/fsxn/vol1_dst2_dst

The usage dropped from 8.4G to 4.6G.

For migrations and secondary backups

I tried out cascading SnapMirror on Amazon FSx for NetApp ONTAP.

It seems useful for migrations and secondary backups, especially when you want to avoid touching the origin ONTAP.

I hope this article helps someone.

That's all from のんピ (@non____97) of the Consulting Department, AWS Business Division!
